var/home/core/zuul-output/ (directory, owner core:core)
var/home/core/zuul-output/logs/ (directory, owner core:core)
var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed file, owner core:core)

[tar archive: the compressed contents of kubelet.log.gz are binary data and not recoverable as text]
Qf +o{˱:U#GHY,[J0s4:~`hxjǓs_TۢN?m#GY2aF,B0#BXYLDhA #(H.7&"u|צ.|> bvyl\|>Tzs2n=j2HebNgBo<^z,T(zb/9DY˲d9H{^VC H:)ؤVs۵߭Q-bmfiG=嬁d@e܇^6;kfff̊|bYg ЫgV{VaX'Y0uWNZJVKj߰y` \N1%]tpӞ$r4˭Lצ5LԫWƑ ?'2wܧ-kz4[N~pYU?pϻIp9|X9p]rՠj1@E7kf;cXbiy5 ͗Y-~b::k>i~ṥV;q҃oW~)shz^^7^)~$qn ֜!B(:̄f!$*8ʝ7\ 3f4j|@I^1Tk%i[CbP9h֛#Yv"{qa †b8rJe:@`\[BYvJHfQs ?=ZLV{=)@HRO `ZZ0;c"0 @#!2H ZLݐIYi\]oSֵ0N<&p\3#*!Vh5{9}Ė]c7>qJ3y ͤaQ~Js{ ÊV>adþyc\tpq*Sd-(ef[C12[:O/wӻ;L..߱u45nf oo"zҝh8\ɨyŜGp󤿣74lr_z>pԜ?ox6ٵݻwy ;a,|}&:dzs+jY".U~B.^x2vR'd$-yNIJqv8!C2}H83$t{ /}HsxoI\ Pb$! ,c AI aNZQ.l`:RbX 4Fah 6yJ4x0jgggB× 5ĝ7n"/իFnP^O.fTv'ju95"cFFfbN1Rb2ƂeLCLR 0֫N5*gR]Sј@4ң&H] EKfLz-*UvYW9ZQ~8Ժ8jbj+JjMs8pdž>mSw O!˨^Cji&WGxijW AE|i}LRAx}j<<_Rp 2'M=ff1[q=S7 CQJ4e(f\rAiMwv&fm]N]y,kݥowR'vtjɺRь$HPhg(CM>ުC2 %a )T P|KYv@4\gB/4pԺzCm~Ͼ=R'fl#9Y \iD/0 QD.l=؁ʾ KRiH*bJWϣFe x"pE% օ$c {D*:VtJjSU  8:=PN93;K׿9O^žgE8Pl̙R{!Bpg IRi1CeqW~7ՕmHsG~Lo%9Ojouv9oIr&8"6\:q7GoaT~6&3WlmBu~Ke`@ы a. {_.c.p9{=N'AH[OS= ˶nX[7hfy|D q0l`ż@o]&1h핑׶gUY:q#cÑM_c5LU6Vz(dM zǕ]]t ˇw~ϔ_IUKA{]]Vl5tԳߢ_κ|~Wϔmqf{~6 \ ;~q/l بS޸W5RT T24&ا3^ؑv|P(%i-&]4V3=lNR@Q3ZHir Iy4,% 68IOmۼQ?1pbA8A Đ7m"9)@< (F%<&וMN6uNvehlN 4 }v .GU>vkLf;ʭyF:U%hUzU8vB˓!s*(d vd;`rk;=!ssBN\eq#s]2GjRs ͕OJKq2*䩘,*V+4W9Z07s?d&HƂDLPTLQGDH'J96-QD3³|iV^tB$/-̅}}< 'li\_;]]o3:GݽB[nEf\# q:1ԉLĜRxt ~*,:ݬ,.Y1ޢ,N|A.է.Mz^z*Yj&*"iKx?TM5NMD;;"o9>a=۠?!3sOLgq83=R.-}L[zn}^s晒vR]ի3WH)8sn̜J+qЕ\bs8Cpy[_&A묉DcB$`F+#I2 J<\3AYm8 ) bDt.tP,@,Yհ7I$S54jqF$g\tI23,vIIPJ%j(c!)r<h AHlXVRKy$" \l +}.NHc"4 5AkΤ3%-1xΒO|-jj_ ?@po~4bY.xSG럲+M!/ņ}>lȚڷ|ˆbjr'ڵA)#pL(\kpM5Q* D(R&J|(\kpM]+\k٢uHt(YkpM5Q& ċF=M򢤭5.pu]ٝC@P+Mg>Yo"}kfxKB9F$㑅H>H'ZH0<`ǭOuo]נp`W?䅦j;h:,jw^M%AOsƻH0snvnL//e? 
n,eIk#_-{PxjasvnY )gnu<^cWP5wCc6*_ W2:?_i!<y,![nu '1beP}*zj7SڇiWG9\ 3f4j|ԩHbaJAݔTq/j{jLzXդ@`a:j3JzKl {Ml&Fmr,sZs~:fF* 6J er+,HIMI(53J}Rm\~v˽4)2.z}[|-5|✢r5ӪV6 O;)&ywZ1ZVjrsIIj^U?Bȫ.n :a $` ^iE_(j Q0amQJ"rE)Q թoW`B<UX-zg՘IDk1hU!-v;gä7'5Ք:(LR!q{hv%Xf y8:rMIw욮k)yi٧Ztm- WV $ 3 h/80࿋I[yպP͢S[WNڵvqMs-Wd<^_v}p<؃~wPdrr%כ%.bEgzFChٖ%YĪIrLQw <׆` )RDraV@x_[(@%pnxY$w- HwD `cRYRfi%1VBD`/ٿݗK= `g#Vĺ {d#BC~!!FPas GBQV30FQr"=? #cSM AL%as /g ,!; X$0z0 tx*CS)00y VYlB"tH\aXy0( |86> }8 S2Emo+`M/byD4OYnV0dqt(Rձ\9IYrbONnԨ47laNuj}5C .nKu[G%h$FmP٨ErFK]_!,#|q!w:TN^u6U378t ]?ɻ>`NO߁ 4Tڸh~}<;C{ -kdh`\Jr˸KGٸ1;p3[.ލAR,us7"K  )\]|曏@祥 r+-$0ܹ/Y"#vA emg³֛/G}>Z&jH,6XH$qE$6bF+й#RҠI "((Hzӆ%';(C`h˰V' [$e;C\11 1ީӉSgn 'x+Z ԇvvIQn35oDË) YxQ]$'U9;5ħ _a\ܓۃ*z(5y{3k0f^L_霳}zRGCQqTd #eyExYΐ)\UzEtI_DwZ"J&;$XNgF->H L::)瘀g t Ȣ`;`)"V"D4You8 )QVPI>u "Yŭ Js,N.1">0ͯ&dd4FM" *=YN,ڼɊ/ >'hgPc %p0h/9v3|%gS`I 3Q ܶnAvZ(ڜ-rm| DcQwTB.ި \7>b*kT H=bW)aW \7lV]gWe,+dW "o"DRLüߕr9[&1qث筄xشT2y_t}a Z1 9KgӯM+'vV|oUW}aW&tUR]Bv@s~gW0toUW͙@Kuv bWjíWHHnl:˸4|iB_6jJgHՃl.$cH:ӛޢ,dwh*p"Tݶ%xf ්f o+m#IswF6؇fFcg/Қ&$%YCUC,J\h,U%YQ_D/8C|4IKvYBFOПu<_uϐWD_F* ICtK!F*IR'ZJ{R<4$4IlfFHy\hsC4`Ϣs@ H0“:<ntҁ`y)1J`UD#Y9(TH^#I̢O֓t<\DDV[sN.Yd+R,%0R#Iށ,4BJB̩Қi$<&-tVZq}, 0oKl2:jǼ†D{*4"$.7!a2G'f'5$흓^K}tp ƍKjLz .I{Qp8Fȳ$ͬs022X=1 $=|pkbO`pi*a6_?ҚƷX; |:E4h\ '=HKH͌W?7}G-U7ڛj_mvվݵYZ=ۯtcv>TVk-+kZ ƣ~*9e lJ6O ,i+ -Z7?]F7e~CNޕZ+{%SO}~ {2d^]s"pduuOܷ>dG^&uzbSBlD_gBb-!c( lx@Ķ@͈ى1-z;|1_kik# L MΉqjX1ɔO EHRccgc4x]{T hkա{뚽uZ~kQxt/7"CܝGb"/,E(X 5Yrofe/L+$$Z?5 TJJ!G/*7d*㛋j'axK1'k׃@FCKtJj&tm۫;USWW>?%t!^3|1]3VkMT—kbׇy3U G<'+ VJ9U LG3wqL`VGgC2QtkAb2YAFkd(<ү6no\F"' D6upCr&s/XvKqMg}y=ϓ}r{1dE/sǏ:2Q :!?0M p*v8Oc1>zLy"LSsƔ ;=z ʟpKW5oMߍC/fy~0k盞oΒo^bFOt{ Bi)AwnJ8,\ɄtͳլE#O(Ѳ"[-ujS||Y38g̞1ߎ17,C[3V[`SԲ^]Jw- 5lH XdGڶ Wxz1tcY/Im{IķZ[L7GƑTDa:KߖP(:= v>֏hNF;|:tO75/wףe-Fޫn;&1q?]XFmz勖xnҚ.km@y7(jڟ8y;rt!ףٯ/ zR;:'] 1*VGD!)QX$7Ur L492U5%xʚZwF8> vS'nEu:Mg/6^m߰J>9e|O߻(ڀU.x])N?ಲu9@}O)|ʳϧdl0ysgƒdβO0I˳'O4NM S2: `RVqc@W*qA /2G٣7 
1l9{Ƽe;ȩ$y>y^Du{>Gw[k՟c׹v,A[Y0Bzz|3\xI˼ȭKXU78IngW~e!`.O"sR|'mקFg3?4n9a2ooywbǣ_I8b6mvW;qSo)3ښ_ݷu=QL~z;lrݐ^Hb@[nl:;l}Կa"-&7/н]kV)&S)LJ@%bKuoZiM5X9ިp0 g4,3BSR"d{G۩i}7^&Xr>Asgi~`BOQvήIst<0m,6;97Ip99ƔC@!32f)4Wtq=xt&9 }5DӁOgrBn:T901fW8qs[սްf5sKݮ FÏB _Y5GB?)'(<[*Püyˇ_qۜ5]oz% mLa-sCZc?tD^e+XU0z|/՟'yq /jywpSߦn'\6L_0 Dl7d܀n6|t8nW(OLŠ]HLGa&֙q]ՕSո4V;ʍk-+4y' yYK /:{\YW9]g|))N@&9ӘmY&AăOhd2B}luz!mtIk^~ E!)O|cԒL;nZc[g͢ˤ._Td3b:'/M~gy]wg]TANrw,m=$x ) 6fE!g*kGBÿUs1Vqoǚ3p0 fT8ׯ]Le}IE.HcQ% m&le\R2 {/z-8,kc1x\&e %f#`H1adgz{G<3f"ecV\G@%L!J\Z\=s)99LaE8Ϻmg}Vs,ԂCKRaZA\8ʥ۩@{rPU_W*n*q+B9)MF4F}>u*|ܺ^rv%zGpLFY,w"jk 0eT Oһb"Ιke7Z:Vx0i;zr ťm:ZIl\ca| ǡ|Q  f4{bϨ>O?]dz}ͰW?7}G-^enjӶ;z_ l:Z49>[YWOwV[z ͷJ .{k5#1 'Cb օJ/u^9b FifJ.zv _dc UQpsqYd߂w. Lts@T@̲>ImqJn ӨlPl0QSt8!0i+ @KxNϢ_\N]gY畽55 i;sNPN$kr'a_33darkmD+Tҁ`$*xBbd.`,w(?64J6ߌY[3>?2ub(Lu.(H_pI՗\/b X%o7ӇQwk}2:;+7Q0 xb(u29 8)I3Q؞e$~: H/ ;?{FnT^H_TnU6'/qjʒ|Ә^D&i$\u0 iw|I 8b2Pfq,(gON9y5ƅNIP<cXxxK^WpJaN8379)ANGmQ" {9 DP, Y''; X$0z0 tI J1uGùPGa(\gqkBJs!jrjr }|jX|G_X"ᦸ0iZ4}''Ǔ[n2aX8S]#S+rqt(Rͦ6_.a8KY$sFPܚS׺h_ٸ>yrr}p)XqtF̹Tq8t֮-]pըr҄OwBFjKx#Q,@h}F~LvQ5.t=sp>\:^;*AGNrݨ*az6ja*#i`إ/Pb1>ŨuN)F:Up3`?:_ަ|w1QOr/o`#$th(ڪ$;CCT2s݂0.%}Ny͸+Gٸ1;p- J?]^σڥdiw"wM p%u]OdEu1-P%$(9U7*D 9}:$u.h~'iKWn$y7 ,t8"\bF+ܑk)i$ "mX60ye#>Ubm*thCIU!V6 1PC^7x`)0H}DymҤS \ËV)9=Wq^ 8 +sB(dHR cJ^`x̡-{R^Z&;J#xb9qNt@UP`+d "RPܦTE a B) B"I07r8- BnJvɆL B{H Q,[.OLӅRi̎1Ef؄)!՝L|P/r e>MBWF>Bsi%i9?4sNi9849?4sNi9?4sNi9?4sN^$0R4sNi9?4Whiae{ 9GRF') "R*lӦbu=E}sAaOa'vzqImp r!b)5G 7`Q@3\`c pxa@$yA)ıHY+p\olsSh8p7P֋9*M#6 j5Qy\Pa82\*밊ƥCCK"@S*GȂјI&rpT>X5qnD mNj!G}!xt #ҀD-d#6HiOSx^jN+v3({V{(GTPDTS|)ѡ_j1J"rWX|I0ޥTz0U{?U>i–sc̱r2>P̀1T)b+t@Hmߜ y5pY?[(o >dLI".IgmJ/磪`]0kxqG陼KyO`JH3oKA#.SK aeΠFj2h9j΃4\u0N ~DYxR%0$PքR`bf tPې';7xxs!)"Q8-9g//S>~Ԭ]f p5ci(xPiw$Ina1Vͪ$LW͋?#S8T)1<< S&B?o[/-n==vNnvg]٧dَcw)⤸Clԛɂ{_Y@iAUŶI$}IAbyyiBB4!,Py5#Z{4`jcI&zGl}n4Z+;vهd/|m6V!P< J}[kv 3 jTHCKK v-h3:T{)pN39H%T8 yf>]%nŜ%KI4 ]4Sxu?L 9MA>Z5i15~:nʧtK֤&[2sNz[f-qǕDIݻhuoڢΔ]_>y1X\b.LɘfNq kM\N1SrN`?.ONWY@(#-vKA@[A[SDGCJ9ōJiUH-m=3{9A {)9;(.Fraa>K%oN#9 
)ȯzثOcU劌P6g 'WI"GmS.$NMf /]!\EɌޖX (#1\0%sLS@#Ӽ~Eiܰ>}J5.v]4bv'gE~Lt>Ӟ<X΋#z!Uxd!R&uA=0&ZH0<H۩do]b2env(vAcڙte$C艕ffAerr짗|2gYΚ9xĪ50wӍcтPOn9'rr,+$3 f'Ku5ru+ 3 ƢʯX&^1a]w>n#^[j߰y` ߃Ȟv>t\.AYcIr\YӭۮZ4-ƑTR}SO¹~ h۔ӴEzc{Evz{ЬytNV]'&UkkgEtmnFMTtǡ0t;l7 6DD$$6x3O@|tۓ+˧iT#ZQ)B(l S xxSM2T fR18}U<ߙ\k>L./VJI}޸Venpc`\bҥVpb#V3^TQٻud!P̿(3K!zƌƁZ 1 aJsͤ~JPEv}&=R ~_Xϣ ֐KnKgy5?a%t\YUUӤv4\`J;nƩt6 516ھ B[nۅwzt1ijn_"r#imfW0PMu~V i~j]{-`jغnZ;=4yf~F>ݞ<v%][:&Y0,֜Nts,ީKJf Q窻tsa=heyٸ3ϝj,})+&}r)d̉RqdK.m_ 6̪) TY;m,__/Uni6L;ܖ1 ;v-uxy9yʽ\o2G]Ӧ{Kbj^ik (kBA#n Ӏ x#[n:dCR-  |0arroDuvFj *-3r !T]O"J,epl sTp:եJGkQ:I.YBp/Q[W12"cLẰ/Ŕgڠ%%lBPty݇\\ň&8z܅\7`rWXoVIpѹ^]  Vu+ؼptRaÌYkDQAM0YQE :cU#}75vTc"k0DUwwXPO ܧ~j]k &uD%ԖhQjczkaL-c,`.XaR~::=b.4zn|>n7[1_믒cg-w %ȗZrV:Afx2F*A{Pr.`9VNF`댹債 ڽ{v䞵Lj*-V`oSU7dzF#$uV mZ]9]eL-E\)滼6 $*;ve7tۿm^~]+E I}v7}Y$<lէԽ:)6L@e :eL:Iب_jJՔ.]g##)syx2-]0z-h\}O 9LڛM:m-^haBwWآnYs CjX#ҙ Z7 _4 =_P;^k6p=\Q c[ %.kqxhf|}%0Bl9nO'ݜ<_6mKyiu-C^;T F'\ qeӀ x#[n: &"QQ8 7rroDux댜".FUy2匭ot`Iީt&n5-Dvn~ZS(TEP P=)Q[M*Ff@`̙֌Z3vR:݉YUܤ:m&fv[͓_4K7SE# ;cQ-(NU.)'h TmW =qt(R_m>݀Lأ;Uw{4tQlkG}7酷1G\A..빥nnD`8fRCy 7~|jI֑om6 ,Ah} +3f1Y;ݘޤRkQ z|KB8_58m9_#}30Im\  G*'~Է ׻v?.xuoXo`&U;V$c'فC ] ͆Xgh[O'|q)[mVlMn%@~(~ t;K;.~Нռ]M.+@6U} 7.f'+D|G QCڈH;>N7Gz~[LxaC.Gf[x;I8 )$EUh;r-% D`(Fe'驣 3WV!FXV -Z6m"))Qi%!X8,MN6uars}(G:->]vtG5hSQEMEK:O/i5_ 81sB(dHR cJ^`x̕c^iac |acT`Q^lyZLCB`zmR;ST$ :ݞAd "RPܦ&s:2,";@R.k%D4a\g䬆q{6N~#4eԙy+vsNǂuȇ##fH4N}FX9}iRΑIfJchB2[dT`S3N[;N#R BRjKo,! g^iK0<=pxa@$yA)ıHY+p஫wE%ean3s֋ٵ*M#6 j5Qyp6/r P0vY4a#A2?9HysQ \tcݸU[ؽ,LeS<]⮍}^mCw@ˠbƅUIQvRIwpk`eВ ho) vWk2opZWM9C-`zZNw#14#9{C<3_w=I0. 
_vo(RdKJo+1d:]!2/M-+@2=Y)s6) pa&EI1'01 7)3`ql%pR8t~Doc:on9v#bj{\9<#G(^f1] c\# K"v"MD<ۉJ;nƎDjB#_IkEv,:RjTK6;˜̞ˈ72[v^9]ؕ.lK,,hl 4 ݛƫlX(D(R!J --% ?}BG !p(%/M`"{&CD0E4>qJ7DHb΂wDðq9*+1fEh< ~F9k0:^/=< ܬN~>|ԑ1?|Y<~>tʪYuSsC JG S73#Ap2bG& $ v] ajp(}r"ıƻ-Nw>;s&w˭j#AG9itNW06ؗKh1'@^K' , CGCԑ茖 ;ꥠN ɭ =w!cFc*PƽuH ÌМ-o.WPhlqy +Q&w~>%6\]k=k:D}ղgZ`jp!/^ MfR:^uJ*RXdpnx93FJxcIQel Sǔ1Igq|4H5.Ubv%ЕY1L/}VXo2t9H''46#z!Uxd!R&wىhA #(H46&"}pPzdE%[hcSLj Bn7p4Lw Ln}f6=ۅ9EYersr컗b2`{Nՠqx:m)pBlӧ͑IrZSe6AM)lycf*&gӛG7mGR7νm^p^- )@0{ۂ\& !!Pɑ!+JėN4ˡ\|7欜WXyf,|lHNX9fO7ftl`q94s΄X$ǿDi R5IJ}jdjiBC uF]%regJID٩D%QkTWZL+ egU"w N꓿ګC]spX i :~(T(:\ TS:IT궽:f ’DQW@B+*QKD%#z"(ƝQW@+*Q"TJzu C!u3g0; &jk?^]%*9+TWtv)v~U"WuF]SWW]Nu%^OnB ^,W2: }|ƽ>z$]~?|Μ=΁ko@οZ 7<#r%ͱAPFbVV7'?'Mr, 7",0as!\($È?j 5%K?"}CWY7 \`]E8AGΙ17~51vodԊ[RZN^ShG04F[JhFz˻~3[-碫n1ցxrp=VNt*`RtcQ'\4k8uلdk>3()b,zmofqhfY~fUcBbP9!-"0-*wׅgZ6_'c5ghQy24פ?iױY /Iߢe$-QDԒu[^Dv;^f7Ee6r&wUD CX !T Iw\ytXBJzPN"HXG"\JlB Occ(,-} \KLR5"_%? F|{+X6Q2t9VYePo>D 4t=u}Gòhn./%s$rHr `V)y \SJRraV x$B@|܍nJpL7LxRAyl| ?]vm*%Ry:?LyD4LG۫ Y8:c)6 ހ&0jwfͳfVfO~|3f#S肘+eXr6T/v%H◛ċ7~;PH5!M0X0ևb\'4Ɋ]=rprl^`~ɦQU(b2]jIs`'URMq̒3gwQ^ g /7?6|O#&~O߃480 g$Gۣ:{Cm CT3jǸ n& J?d|? Kn![fʷxWRׅ| jk]TTc* yI QCڈ)zTUf|SL9?DJW:8^NI,`R:Iq 1Vq`ȵ4hӎ7E(IzhCJ|:ňÉlCh˰VMGqŴ ZD,DkMV6uNFxnFyl k#/|GፗN7oUӋ@Vp`INN2֕D৞ON~ɘqRlH_J'?o,(-@j9!^Fs3$i19sJh/0W<6@Qp5),=wPgדeyΣ'ɩeyrvEOGx-ۘqVHu@BIdL X#xb9qNxH LZ:)瘀g@ [pc Ȣ`;)"VD h$-{ :_'@sQFi VdrNep`*qPU[y6318Osmq؅ v]+u P0w8h2H$314CDrE}2*V)NNG2xZDN;Ww2D8@NA.D,Q1^zcd !L.iMb,ɫ1MF{>9sQf,.) Eii Mi66%@n3`q8 KTz/N m楛g^) ՜%їgLm >jPAt[1]c+<_ E_&mQ+R @6>imw1C#t. lV Ąٮsm@h4Xk@h;sd@hm[??OӶB䌂Z*gRbr叫Pg\?{W8 ]leUG} v~i`Z%yE$)YE6%K6 (&Sd_Nrf:8]\xb҇UL8!&dl`T Km8"x d9_zK3caiDNH-"NAga}?RUI]sz^tHcD)5&[A{t0d)|\rƻ_H7q{'5{EƧH$EG" GyX1{ D1|{ F($3Ts.g&wz0.ϡ= =a#Z!ـfr6S+թf*k? 
xmNbݗFha8JqI2J:l "IĭHQ,JiJ=T,N<ƻZ?DelMTc%f./@M]4^b+1Чv}?oN9u,Hə*.Be@=COh5D,!6aXoOwo]^7cmf67׃|藆i;k{ڦR`NqzƧ(0 ^nNRnln l{/ i$ktW毶sr>D [[%Zbs7M.A Ep58NZC/ǐ%sd JdKh\}ycBxDˆw^]lJImB<{ّBen=n2gL{-wJZne tI/0 IY%4~ϖ̲ -L 4cq{E9Yhwh6*p#1dp׈z4<0I~4@n0LPj;{S-tmns5ʚMtYu>nx7؛wuu& ^}-DՙD_6z~xQep8hᤃ ^X0)e F%R&b&k|6p, Z!2#e1xޘ|h>Noo^YӪU7"ϒǾT-{Qxn몷kvV"\4.o˞q^c \STfqZYtxMIޝ1F[݊[iFx@opKR=tq"%4x''a(aHQǭI^Gu'I0e&q4:J=$1$bLC;ITY-ԝmR$vㄹsp M8a)5̂*ٱ:f|$%IyܞZo֒,tKB:ng=?5"~B077=uxL$M[:6~VX0 v kΆ?km,%_};eY99^y}l$aQ-O'+t9a((^"-CTM8h>CS !>Q`$Z6PnIVJkm&1rCb b: kp3Bn\7C^NT] wQT;-p!d,*4R%x pD`-uTv!aϖ/{eɘbJ3bTiʂGe>Y{8YEπV]Xˆ@/ivHI;k ;PtXu 'aSVD>[<[=z3(Q\OI`fz9  xEOp/mJ?K'"prp@\Kn'uJr?Z-KlUTR2ĺLb.zPLeș|ggÅj.dE[i,|4]Ntho[<_$ϧ,=!zhӸ?՗o,nQwsgbQYƯ⇟Zy[:+u 呟 q|;z3߁ڜgz3H]e5=uUkE]SWWJ"zuՕJU^`%٨LR碮Z驫L%罺Ji`;k/J; n0=۸\~6*SLq?2Bs v6*+F]ej>uuTE\^];zK۫Լڏ\Y'QkۨIJNꥯ`.#Wt,3 ſ?fefH˵EPSE R) M)**LJ4>K'jpY7)4JEf?$ ^%u tK-_sx>]h.zJ_KJ;G3JT$^KZ-^w_(IDl Y1,4%bzWO{ǫJ|jz9f&oij$+,&1gv$LQ&5.pEpki$KtFrF/|YQxooFDI}R[1`.<=1(>E'CN[*Enpj o])QaKI"n28%4%zbbS)QMaZ?DeW+gTbbRs|1OW2!Mۡ_@TϘpZ`%m7Y͎:.ak@͘S) ̂.妗xb҇UL8kBDQ),)qD9wb&ˉ U|i?:0*тψo(<`àFw-ݡdeF2{s%%E`BqdxZe//zuav)F^&ky AdGֶ ei ʼ-<)i%^ӭ&?=8BRVI5rh~nZf<©:&.:0 X@QcNZo-}.9Hg p35"f6 LR?Cnv!> ڄ݄1'5lqÃfy`޼ӭ`3]x'R龎 MŖ (ʋ* oG '~A47rlg҃ʼnݭtOYr>~}&Tr2Q9Y`؜/5H y\te)"EOF9 JTʨSQ)kRwN8ƿ 1wY;}^d5Y%C:i%WWDBI ,pn ̙CN ܩĂ)1Z.<%0-g课_a\, 78*.[{[!VXvYwW^}n~](u9Cm_ͺ nL_pgת=E&!W5;#xC9Xts^pjViqV{]v=]h)l٠;sAwғGwT.NI%8xeq2 q9MLYF!cF0?䢶I.  oǏ ,sFr`$Z5!s&bbIb.*N7 k =I`#*Djv}M8g?ouﲸwQee ;-2?MO&iGw^VF:I*-).yK 7A2Om?d0]O-\/wbwvm6NZ^zr<->0=dBw4l4*=ќ wy3gwsymwSeϭs7Όp@<5-SY6\kޖ1DU U ADIBJtuZk5 )p!)@+ʅ LXGU 4Q{( &nյ>X"'&'btks ln*דi} #GqxgUU{Ϡ4'qO(gѩ(U1SbBF5>XQFs$gPp>!*n)&4Rm[m'Iƃl9#e[h:[x$u^plHQU3i|;ӽ̯(m?}b ,S2JHq$Ky&;^Q }bKZx¦cl`;8f+{LbDaT1e9-’sy,V5xզmZy{~ S;T󊠍 tp=@+x^.$b6֨,6?%N:[1(-YSCL ZS? (#Ȳ̫VkU&@;0Q;݌89n <6LwI.TAמgGrGX<0A:6E-(%p/!ĵ7 8VA,Qs LjMm18 z,Xp+ 1'Ǚmh 2m=r~DӇ#إ뙒/qj7ș/.$ɷ[;^J'yDɰ$j ` x@||2ʕ$稊8\WE@+ ;t! 
v$xql^3U腡@ːt"ÀM!*PPȣ X  8u6 RVʺߣ2ceVqKRV_ˁĐ%3Èv;iVS>D\2̈́W!8TePɛ mU%ӝ"Hd2,bP#KnyD p4:۸9&I-u8 S^Ȳ:>Y8)]/JLwz:qIba]eY.s9+Ņ|b1]nO Ap& thQC ĉP9v9΋P:$tz(+rQ+)'v<{2<}SN;" *8G'$:6=7 @aNnEJy8d)UІ \dq)W2"k{=mjB%C'Ȝv9k$98O\0F1?+c>]ԏ7VKuwU7\O..;H^sixQopo~4IcOBn鲩ԍ Y!}Bf.Y|8=^ٻMkA5^gli 0^y!t$w,Q|68TdqJOh17"=םF9%[U(U7Է ߻@q wo~zN$\Φhx=;]Z57bQOG9~7~+gSn@(~f~}yw\Ŭq/l ٨uA޸e6U **CވH3>j!%jiMx^hCy/[U4q`Czh r:;E^3|"Jk"Ɂs'iѰW$$I֫ 3WVW<A1NP1MH>Py HiRUjkMV6uqr}u,/[a#GsAEƠA /^j:`>=?vqOIBxQNFW6;%ΕEV NBi`V"Vv!)% [9OU~~N.J2 ǓqNj )/jG-it֩C<6ϼ Sjea\~ 6#U2^ K\c%I)O0y Yʨsx.'ô1bv*1uAv(fS1..Mw3 ڄ.4)rAƍMI7aƍ(.RUcsoȷ-~v`Rtt}2^iѝ(ofaLRɕi`Z߆(~9Gq ?>ZK +K9[~MxonpP:q)Kh&\/ 6\o)]&:y-li-ϩ + Wqٙ'JI;]R68;⬥]Hq$%>&A묉DcB$`F+$FO49ؠ6@I 1I=@Xic`G)ݫao3 C}7'ΝF`8JDrE$9Ìa|$LHR*Q MFӷT A:M9MS UTs4Rbr&-mPvPN6ѹ6 <؝Xy 1LHx0dX 6@uF[R`:$FD"pSfN;A(㵤x"1(jiRpK( 4k/\^%6y͋Aj:%(J L>(L.7%J?SGg*vQ/!JWo]*!0T[xBtsm4->-vofPNc0?zd=gɜ7Öv'6Bnda2 _.?-N 0Abg+=g/ZM[0+*݄F*e]A-~n}?{*Uph {8{ScìذF<") hHTng4q I{ߣ,)wVJ}Xg,ȪBᝉ H-LB`% QPHs\dR`s*H˅Az;(r*}kG']좝?MW\ /3rU,YINӫH+9 N9I-\Pi4Hr)a:A!rYH@" m Ihg7Ed<zd3.o#% jZ%uLA _BDpDy &g F:S"k}kqʮB)AL_CI&/s؍&[=$nZmֶIdV/kWu95# R֖v[!n K3mDۤIM#|a}*"1])'a16:{`tFbh!SW X1hG],g}%a:f%Uz\Cذ؛d6j7eS#{ xB.  :ރ%vw&+Z'A裃"g@gˆ"'◶8Fu0 |G@A.)L1Q"%8TR0 3k'T bš#3إ.VAjoSCAV0J8 9r{10oFY#/A[YьV3 d-Š.kHl҇b|> AJ._>1&!Fuu A (v PJ%l-ѧրhg]+y r.Az-l blw!3ڄ`I[vhX<"(Lg@ @AnʀG`k? 
LոFP6 E¨4,,I1dlDH!(VDUb|؀`%!ud18MUJ"42+ Bti[ q/GT^A Z6P*t `F~+Φ%L0AlmRjh\1ڹymN,fi=:]=2I߬ Dz qu$JN"ip$till%fuPp$JHu\mT=kM!JKBq$RVO ]j31iG4aóa#·fĞT2=npJȀ0QsE6G=TʍmeEף:%\IuAAtP%(H &Ш3ES'2(>QKdhET ߮ow`E^QH"NdviP'7 `!g"z7+V2Z8 RT XQTb:-pTuL\t%j#*`hAgۦwZxnfHkԬVg0j̤yfj)څ'k#DT赫>@MVkibUw֊ `mQi#b  GuZS0²M+΀| =ʤ蹐X=?HAa8QZ{*(=u@B%bIm(6 qLp@a@\T"6FTS.*w]"AC1 0&ʥ`'T$1DЅK0p$x !K#r6Lza!(L #"B`j?oZy?泳厖 e'Sd75@3iVQm9~2UD%MMj}RSM7TonzIIDEu:t/+1լ>"h:u.!βP+iXBP:,a uXBP:,a uXBP:,a uXBP:,a uXBP:,a uXBP:,9XJcaI^Pn=أHt:@< uFJJP:,a uXBP:,a uXBP:,a uXBP:,a uXBP:,a uXBP:,a uV1 u4 wCPc9s<"htE@@g]נ+5փ[# W,񼯌*s,Z{Pz W_.g4Z}H>l+'gs*vӫmrŧq틳v Sji٘^PoN_ۧ\\  DsnZj!_|3Y2;?ǧ bBf4ѡPP{ܴIӧDY]QlT-<=r9(ppdzFq4o%vPXмטm==FRҲ}ӃJ<£SRlO'lsHS SSdaX'7&?7z׊LlE.fMJ>͖ߒ SEe,_ίaퟛ=Wzy ?Y5MutIQdi;sbolx՞#g bx]Z8|c\rqrN*jh??*S:x{7f租MXX>ݼYz7bt}GDZ@$Tv`v~o6>;mk뭠xcϻek d^;˚F;s9=dN9^bq|7Liʄ>r:=={ʷ.r*{>~CXyy?t{{N(+ ? KA޼Psf|"]G~fWs: m˫&NNA},j;7hىe777c~C 䶃7'L<ߒpr-qaats;.jazk엇+͞311cުC ;;C4`^cJzTJ6##@mLjqӛ>?6LXxQܭ[?raz{D!ib=ЪgJ\0>Y ߽0M}}M:cc/f6.?gnv|aHgi]1@7.{}ÜwsDjY$,Vx~DWxs!m;RdM `'̾ʾ6#wC݉g@C?qx9F||4{d{6{c.roN.c 2u)U#,MGz$z'>C <_JG5ߖl} {v~6OW~q%|~( [>dt6oU y5ww^,1V's@}vCVN=-n8{BcCF˃&{2@_o:5g;T"B!1{}tq9|n>Zrݡ{^|v0@>rru]O؏2ǴԶb,'N-G4e~q>@? Ym=^}}Yb7Q;\;V0i)ݵI [}-;|pg&kv]SLwm$94i22 Y, +EڼE\q|q;- 2]ܾ yy2H qj.^z|1kVJEVNOp4۝'Dxۋ z~u %ӷr]6~v닻Yy0oyvwByBۻz.nɓ=ZKLŵn//~]elnҍ`gs~]$w P[:se#>|^u[裲O]!h1B- qPkݻևZC 1iF%KP>){ 3vZ?Z#/LΝO~ 5#rk%%ġ(;_M<x`ml%wr`PQhDP'yBXgdRhBI1GөGu38744HHd Q@FϭBG#=)Uw[7⬯@w[sG_cͱS=vw6;~1`fc3oen;r'ӽNxdt. 
/խ;ff(kkp-]ym-~-?n-Wf] ;"Ep&$i2 $gg䍢H٨%P&&3,tоSNW!VtIk^~KEU Bj :kĎ}٭nI O39o?{ D:I.'-GKVywΑ4g:8(g'Ʌyʜg-% K@2D@~ 2=˶6ey rVp#=ZLFk9-vElagq/[N>zKU첄[RO4f/p88K;fꏏzs#PkkDpE]g{(䨑|d]tWu&`۱c"%)݋GEJT&]g8JM|8̏#p.- ĒMI(>" @!,mt8NA,3!J B[="%,1NIQJ4 vfސ99T 3rL_!X`ZJ2־hyzFPt/YvSmy-7/i]r5| 70z-DLLyITM@xMCkN84[uY\ giR$/YaVW9n,d*8zR4XevA](w8y/7RVuO,G !)0 od(۸r^3^k(e̱l5Cؙ݋j{iiEbz@ggRZ;z& 4@*NQ&GWֽAiBf" +Rt H/vdbk/)3g@G,J8=nZ_eKA >/Q" ?Nk* K(%W>o ^L7U^0]Q7E\5R'A`Q ԛ#YѐR' &L\wHe<)tF^[+*Ҹ}d Rqmdp3V+ ]lxHf|͛> SCb?ǫ7Fym/osz|v~qcOZjA2τӥI*ryM%0z qIYIQلoϴ_^}C 6?9lq:0-ɋ+b(漕kAj4|9k|IbG/cm#Y:қG:oF $aօ5)mѸO2Gë%c+ *Q6j۳ʭFu\HXu<֗Sƹ*f9?x??lM ?7槫88#q ?Տ_o/ׯ/āƲ3Z+ $ x ?ah4rhҊ}.z=ƕGnwXqfq56 $Oߏi, W˴mwUY5#U(Ui"|k`Z-4TU 5U<5&ě3]ّv| m1նxPJ_ln%]߸Ti5h6ZEh=<at HY&PNSG6nC8x0EAH)24(F4D3ȭY N':S]^JaXg2qf/u@ܳCv>hܚTtK~\էmL/2yzђg0FāAliqOimQJEU&E z,PJT;U} Ī3۳$å;gt:3[>lr M&k^p sȸYJo18 $ST1INn Etm|ϥ4ҟnϰ7/qVe*fGՌ+G?6`पn~b賷'ݶW?7?ZE} ^vWAztbv8 5"}>VJ֐E <}zG<h!fYYi8) 5.&xUtV: j{0z||!m99^ œybn ӡL\@iu?`þ+,,Q]YUiVGrT!M MYa*R?jpo[[OㄎeC)Iu"?sw۾Dy7 zL9kZafo˳(%~8 @\*ݼg5p~#e΃8C?xr0<:IZ:_Na:D-GfoK'&^lv>_3yŖS% dN_懚Oh&j~ř_v1/jyw.'6_~]ΫRb^ѵ_?H+W^L%[o=ՃD;՟]`uR@(Χ\ ) l~:?W]؝.vnpPzq(qc^׋>o9QLfmV{YZ3QTNrph]/.v{|um"M.UfEZp&!2: UTIvZQYv`$J[9O['HxL4*yLw\Q9{lݽ0ݷ0﫩8B$9!k!MN8' $Y)KȒ1&sg0}mw-U?  L `!DbG\e R@3F?7z g"2-!X@LX@g'0>8AzuM"ٷ>1MHτ4$30.Z2DL@P- I#CNK{; ҁuޮf֪`PSL2(Kdr=,B-C/TU>YN>#:"sE[n\r=2\j5*TZޛo\Ypk.g1|ګ0 80{BɆE;+| .m͢|Mrra' |u#K%{}RKl^liƌl)*]å~Z f~zr3B<1P?H 6UOk||{TT|=l$9(Tp e@ 6TPOH]et,LQZٱL`jԕrZsj7r_;{NJF^G]F%?2u%vPWSW.=j L0ГQW\CNE]!L%5z m' :-*ڿ_Oaζఫ>WmÔ+l;33oP.1D,lrڥr|kK#Nk16O^ю-^IF[ nl0(.i\$9)8$gͥOryP/,tt;M6Z9bimeR8{QqRi)@FBtI2,sIf_Re u$YSS([e7WH&o BQD ǝ#g~/ϝW׳ a?}!۟ / &L|1X0D M'|m,bmSS#ǔ1#8=8U?1뫉l8kQOɇK(P7=nƗQ~ O`Ks<(%8xI)mBzF`̌bVhG (&bvNhvKhG%F7.O'o[?yôt+{+J-)3v_ǗfAurQ͋g]u&{{' Ӓ#ar3'#ݭm"Zb}7M"j 29Y;| N2WHf'``>; mWN(^X(&z21卩ݽJl;y+DN[jis Ćyw n[Ben=/ԣXxicr;MJnvGq$e4Uh~jLԷ84hrIhGk(j֛{fц>Ng p}܈=ix` f3[&i:6a;mzN#)hE~@pμS*-l֙C_łNCAQᄷI7 wi zw{:'. 
p\Dj Au0I(a*W&g;КgU?_7jUF7z8>qmMG^ ^wd)d'ם'JOx&pBrbA~:2BJ>LǞO*| o0F2(E3IBRRaGԴh؏S8GMj"y!M1|R #HKHE'vҙ:1ɜ]<Ҋx;?fvie}":6F3A{KbDcMjȝ "DvM<0*ɍS$\)׫ly8u2.$#Gۨ&[UZ21nJ!NóZvYw-tm~| -fW9hqng񾧛َmhߓ^ 0 ezAM ztI=~T]ç0&5Wpw^TnpD~\gQ->̤O*kt85kLgѥ? wWPF7+:ط űJ嬬#\u9how_ qkH;\FG.{g$Ȥlt2\"$OIr1)D=qO܂ :mJ&ԁs )K$)P5"PU1Uؑ9[f $a 4AS_lZ<oqhgn6f޾xҧ ZtO#"ג6&ƛ]3? >e>Dby[. h'vu%bf=/\7fW7i4|ϣ=ij|Cƫ` w=In5g?kM&YyW˝[mwlC-t^i:s*2KύjyxpF R´vNmCT^ JL$Į0CgZwֹ7œ\u4Y@ŸhlB;袦X29;'k9ڨiЍf`s -lj{;lRnZxl: ֜ɃS1##3^1'E 1c2!q&pAUQ YTFT&HI.R3HR^qB<%9%x% 8[ש{vv 6k+:o'""Đ`MO'B,!a!事{b\?dž»ϮExxڡOoNwe!͞գϷ};zoY{^{N^Ͽ`39I*on33.6(1S"m1$Y D{#. @xt=JhC EB"p[Wyp(7qUN5_(9E4~o`Cb|o&}$ɛЕs! jʛ2d@: l8P41yJB&Ռ 4#9_@ UO8l (*n)&TR5c1r6k(Uta1K B Bƒ_&oӐӋ[:cGyz j+؂i$˔RF#b c\e0 q)ɆWBx҅Ou pgYmrD3tBsIL(J>&5٬pcŸc_MKjyl ;o &= Ir,DA(ap #xEI*mIZȈ @+QׄHb(d  j[M9Ĩ%ϊX4b1W#uӈkSm)(OAQe_`E`F=s!H䜧\'( @&)MyЂg\pDTQQ#  ͭ5b1r6kĖ} Qwu tIKZ~w(p~#"uSjO6ͨ(( FB%1IS8YmXP5RԂh^ Js[{QP-|~DB@ckZ'XƷ哛].3!d˗;Ź[eG{mɱ$j ` x@||2•$(8\7W@6B|oGBEϏWj5CBӓd@ n4*h(QlFqㄡq?AFfͰ?.ae-۰ pe$1'#*Yfw*8Ek%QqY],A $-InGqKmMqQDKd4BJ#B" q(2@)(73ߪȕK;A*+H{EVBF\ 1e1T[4QI\8m\Ƥ>lMѪ|O52^TӟqP.{tJ~Ͽ5:o=RA8W p3_a._y_]A)< X5h5{ȐpQ>gIB ;&grfGxg1*7usmV W9j=!٘c*DC\M.9?4%}>dIfJehC 8k;Q"e~nDzjP%C'Ȝo $9[>q]Lo>^N疛]8$ߎ'>!' BFҤu$!Wt6 kFp,2{É~̳w?Ø޸rQY?&FmWL!d$,}kxs׷X S9=p ⟃?SS<۴`zKd?:ӻ)~û|?'q[+ $Ym$GEwm;$8Xb+㑼俧jɲ7۔-ytl6U|UGǟ~~ģMG&8Ѯ[w~s[A|,_>5fa$寃?}Qi8 %vNtPAw&YA\ j8,xuQڨ,)QU!NZ1ӥَZL "eRK 2Z}%D(W$/!Ѣ9yq;w4)z*dKz49(ϰAaD*+<Ƿ5,ƣ.aB{9q};{n:H6)Lvk٠`< JF MܔL0g2[gX0mwe`mӭsviеL=ۓ)諳㫚oY{M{,7ÇހތƼˀe5_#yY#Dvs B)C'ظu({^ń]TX6ə9V7mBD ݛw6'~ X(e$1lP ΄c9Z!I ?_SMp4!FeSfb('C,y*-^ * Es=,nih="-mmBǗ>Q:uHY'䇧uSfum)dOX^r|`fpX IzǝtrK)6|,L4N*lb!FH{%R~,Obz<.8!>wQͱs` *3BZq=$dBUTJR:ɫX[%2 |4ނA*!rcJ1M'2G*e|WN6*rw)7a=MCD$ҁ0a. 
!JuDȞX4dxlꓞ1> $0YouEi 3 I/(ԁiKHٔ:=΄ļwه3e0E愳 "[+pc(,Wɾ݈inFz&m'jX1ɔ"&e#$#r}/c'$c4:Q-ku=6%àniڪiݧzf ~qu!=Ӝz/iWKǣ5ɏ"eW_}6NR9( e`.bV?ܟCG ,wiq&bSjDζ/y2dS.@,tVsP,(#ԙ,zd{E <@H~k|)HY,Pe4FIJgcYF AMJ$d.f4̠9T' nN_oM*ڧ|\qzdh|&wOato}olz.yh0Cnpk[)D@vH@ $ȴI(\VP[;n-#M <[lfFHydjoHqc?4xgT:xtP⩣xƓUVky.%ǂȚkPZ3dZg/ IdR?̖s~Jӊ^T<p]ZD(w9:4#T&ȅ )k1Ttm3:HB%jR! '#އ[dPQ80oPE-! 5,AVK5x.F3i8G4WYdݝ'PF.n61WqG&O f(w)8quQ,}ߛԵi>kh& r5hLj sީhmp3~Dm3*yHa4lp8Z[n$ ocΗ]pows{@}TĂF t (_+*J/*\mm<|:qtNS*є=/,oX lyHאsN1DAFkd(1GL]#^@ҳ0`:@.їtЂh֍gՆsHfO)B= >,?IZ`%G8 ak]N|N"Nr-fN bza#4_GwBPmFYQwz7#t҄ȓ\ lj//7˧u.A-] %/cf9ښ H]m|̯T_w/af'[}](o]4Yw I,>,,NmۏT~Ŝ4y ]S0K??zWROr׿#vh7JR:fo0rD;LӝSpsLxCB BAN"4ҽ%]8r{>Fs>iaҽ7lg|zm$5~7FKm\6Vp=!{_:XOHAXg)~Ȧe!0藍v2dNs:lJ>%p̡PGT"f.s>ɮ66 ;HOnomVU[ 4m RwRC䆽4m=]w]ofm])ݛ?6ȝCg[57~FQGrw-LJf޺bai/[gAv޹]v>qquOѓ=Z0yDÕA\_4={ IVO^irj'}Mgi1Mg>wϻvS@_ZôFoRLQ*ip3HԇuoZiM x5\*ȳ7BRYlؓR- gN9^ln(;]uއdjrYVr@C#P9sS aȘ\!bTL烩[ЃIGsH:0 c].'ϭB#= U.\k8%x#zr\뼭Wo26sKo]~ ?-3G87u{p7)Okic$lXFhxIT d &z{;-_Qr\%RH9]g`ak[nEֿbq='}4_Ӽ[f=}]R~KI=ǯfN 'Bp`0JÔQhy&9>Dh1L<`RF#ى,4-d,L`Eg'M)AP~ɋCRHrc9 2*ۦՆ_D复 -4MփN:y*1=b1)b2rV5ߪjjj4SAR@ijV,8cdo ]DNAZj⤫[8KȈФH ՓBI1 YyGÍk3)ښpk(Uta$]g ׅG'^>-jF7Moly__\_5].u{# !MVGȤᕴR!dAW)uN/&DR) F7m&s9BAR`rH*kjٯq<+hjZz#-+^Fs);H3xd62/b)CRrVhQY薨`6t+ rEVtɒ#p.HG1@d}msׇQ v.T4bT(*kDk^#w3I QJ:g@cA`L%)eQ!h"ۘ䐍2Z%r318L2(/r +jmJZٯBq^%]rbt;kwbY8#OaҮ>ms$qRqT>ّmmd|Bl1,.gJQVKA2*'UL$9sٻ6$Ugoq]-8:$,8!"e[9w>eIICGivTU$74]}jnR[~qw;>Q>=cEp|Bf_-Xz5#]lVEHl3Ӛk9 8:~R.LUZ#E)%(kp2AQ 20;iH'JrTNXO' Bd*TE4&Afi~\J8sSK\& HJQ>14gZiS-;#gjrMC?k%KQUԧI&/G(l>zɠs8kϲ%P,HJSYfR+Jǵ*nǂDJ7R}EJ ](('O3GrlXKy`u@-12m8Z"QJ"^B0ko9:rS(&6F \h8Q z,Xp+ IquA!kO3ttG`9?]Iǟ)\O|[nbK|kikY^r($C3[5x XFQE'PHd!$Wx/~x][ udY/ C5!yl PaEM) - BThhvGA]f\1bSw"oN'𸄕}ma݇qQFbU2Ӛ3:Jku975$iI" 0_݌|폵4#HE- L) C \JA 1VE\",0 RtW@Ȉ/"vAYA 3"''8M^ ё GcGoq;HyW#jue}Y8-|-)0oӝ)!)NGzTuںrQmWp6aΊz[\Q7!j 3!N-`] NĻ!YUބ'IB ;"'rb*^I5ExyݛEU(qZOHt6R{!n&)Vnp|V\%Ƅ~W=Ӭu4/>!YoySxu=]x ̾ FrKy0JfnY&Q1~֞;# y˶aX0hfYޢІ8 g0b^ǣ|嘃r Q>!Fm{VLŨU>gsKd'>w}]b6N -AĿFSNɖN?vWȎAut}usꇷ|}+\q30>*Lo"V] 
͇FghgǸu-nM/Z8W|(|?|[}~znO*.XOMSGlaEi1.&G!x">"Zg+e`]frg:o[g{^QxK) 9eH?SwwwU# zL1 aޏp:]}{Di9օ8wJhH9wRɈ16τ={V)i'z 0Rg-b"& Θ T0Q"\ Iv C1N.BMab@P|GP:7j4!ߢOBg)|ф&/_WuJ?K? 2@9(e?]O hƫɟfLRp\J J-/gT0E#C56\!N]2nP}ۇ8fp7y=?jHX֚-Ih|[c_B^RGR"Tmՠ!lWBdi%nd9 $4VtL>8RqB)9`:o BK@N&bB{ձL"jD$7-aTJ2^K*!B{@晶(.GMbnH=oYgm-.7oߛw T}/-Kix6" `y> Ԯ.ջbfG,Yz"~LIٚLDI~Fxb26M2\~ƈ> M .?}RJ2>0$W'FeH2n$ߠ zm3Kݿ߮__ϗ3YU/ftsc^Q;\^i󣬔E"[eJ,oQ^jF&y7j{rI22)!@TxJ WK&6XQknARmJ&H5GP*D,N@mrpfҎ:#gϳF7?:%h9 J7fw*J[kq_G @ꦚ23o'̖]V_M>qZtvoo"VX,WvG;6r:A2OoOKl]?n]6w9vthmͺClYbrƭwmzyxw3$G+-d<33h+w8xT[ 9Ln[Oxvǟ7%j6t?۝o8~b=yvrg7p7@v%6lnuCiMݦ&S yp*2fdd+(!&c,XƴQ<$$ι `6P8kT4*ڈ0Jh֌IEJخʺ"goY@uڼ|6nMXzvEom6p|fjCcoaqלWtD$əRhJOMHDoޓGny]ϕCॸu}?[lpGGBGr"u~ks7?YN[fnNؤ;'lT~ I>lfׇ ƃQ.pEIX)6hd%Ի(G Xb cl 0U( LwIo!Z橏¡h%T9|Ƕig~R%4Nn;5CF OJw>wnyRu:ʻw rs2 @vFPRE<%F A{zq!d\c:Ռ 4# -.G!(*n)&TR]k# VUtagq/]hX^T NbKMMg'3'iPpi8\c ,S2JHq$Ky&^Q }bK:x®cl`;8j+Ǹ{LʵڅQD5vgn< +hθZvZGm鰼V FFd,$ɉIJPc&N;qonlJ[@R2"C*Њw6Eb<W#5"5bq|2RPpLu9&( #]ҞI$rSp.HǝjDh#*-6FIJS`)UTޕqdۿR-6Co B2adHʶI f_oKϺٖ|cz&Fp0&"}ځWI_ q1m"9WVDXK"J-,*NYa`KgOFC^5邗w꾔ALJ ﻂU1x]n83{~qzKIVHwҊ}LM0xGRQQfÇq%ˤVNzӤUy,"4b FF-W12e8CJqP) &q s*iM ͚%, 4mSA nW:8-+_W8<(W(KEU^ik8/Jr%9"1NHFū&{ o'D3%X`!irNHEE $k EP}%Nj8- ʞ[Y3qQDbW2+S3kcϊA IJwzu0ÿ}9+9M\h hb3Fa$$U&p(dN5jR%S"5HrH3,rb?#KnyD+AsoThtq!<[vΉs^:Y[Zl~H; =]59ݸz1Y0/,X:ŜϱYSM}p<+Z~]NnGcq+dEN P l$\PB9a'#rr$;K׿>L/ObߔZhGj9P(׋5 q5[ϟ{yARqCaVN?bd+_{ggc1O9<牿SXw& p ,6^<){ohT bN?-?fwW7~~/[&fwWq|!̥~Ӭmiۥ͑b0z5[4n?!JRdcI ֗t6҆8V-[1AzГ2{؍2y%7R30R'y9u$,pu8WX Sa㔞b8ۘ_w>)Qc⟳ۅ>p ~zs?2}ޝx#8F6ᏽ`lw>hTX޼hԊu(&5j1)R ?^}q~M|;)9ͬ+^&\WPlTEFy8JޒJ! 
H/C!}l dBo:V\ 76_ٺ>ڙWz6/k9R)g>)x䴰F#N(ţa /8.)IȳInԛ s++ ֣.jt ̆R$<O%,@JOaL3I^pؘxd=f 8jևl7ܪt6YC:V5\HӼ[[LNĩ@MDc7A0di⥖֖4ARE,_i-ej}-u8b或O' \2xr~I&騙PBfO"AQHeSLI.!'߸P'0kL4\A+KCOʄsSß_eXl;a=K>:̝+s1£n}`XFIP&sJFƆT2bM3`u -:~u>x[?^@lfٜ.Dgvw" ˯v纞`4!!7PP`PuP~M훙X~{;pRүr8O`.oCtq/q9Oؾ~2sXi~RR,8E$dE";L:Uʴ5Y~OxfrD08Q`tS٥ռS>~V2/`Q, 3&3l<wR̲^<%P೷OO ju4~ez~7G>$m+jeO-&Y`%DKǙ-%%Lߢ[M9g;ΆSǻk&ǰ >P1@C4삚}Dp娀XXT'볳cN3wX,lzxo_uU"-0PZMɉԂK؎u!Ǻc- 9楕H(Qiξei ܳQz^%ſ(12(M-Ζ .P፧:qq,<҂F0a}Y 4R}n]2 rU"qhb.Iz'D RKg RsJ[OfN1耧BG>6BOqʅ^>}xWJJag$`mi#v0$]Kit@`ik*+[WZw1IJ<;v=05͘zcz=B= \'ee\=Jtpuhc&eep!Z' gW'Hd~R OԲEp+[WI\W &*I)TW/R:jJ٫Qk!9vJR2 +Epk[WI\BWIZNBvpJjÞOPF,P//e2-c1ǘ~0L͛6?<Y,f!Ynu O"Vu8r U-ZmAغnjMjіV`ѯIJ/qXi&X#ZWI\њ$>z_H AW/l˘< u㋋(g\ XfQ|0p-R.\"ͼ8c֨L²AA!+xhU ^|\=׃IJ}pifY)H&˾WQeM&JKiQJ' h?9۹MUɼ: SP) g&eSR? *|@%E9J<'@rfdY8iB{= !I+R+VAT H9_t~~{[L[= ЛͨL^;ڕAcv>dwݭ 6$f? y5 R"Ao[s f7LgqA~ xGtz0)IqYxFu |ڣ44[K& VO'q۳*|4IIxGA_ Ւ *6SPU-6ޟVmOgM%ZaH3'PY5ۡ9r mּckTzlmò0K\=R/MK$ּ̩t. ^ys4z[|߲"YHiݣJr-tCv7!NgojrJrdz̙S+AWl; ^WC=,w53]j7T8[utd95WYNu^mלKG>swNqw\0+Q:Sуp=\Irx33E,QYZV;:~m _54wBb#P%#! 
Dh,Zţ4 Rɋ9er Rɦ25Y.U4RP!u-oƫ&T]ɥPSnUvAR:)<=.Apv!hyH&)otJimv9JԪXQycB[ĥ-IZ~IJ<cS9D&{ LSO\GmYV`\z̅@74 ʿz߽ۿ_=d<|:AG}Θ13i,bqb' #QK!=rCXo"g(>h0X[G T~^qsriu1_.PϹu{gj__L 3̠Si&,'1g)b~.f p*XBYZ!:PݼNvg"ګovEFg Pi04L`D$44& Y=ɀs8g#8,JK"rdKR&}ĒZAK Q_)`5KRInbɯXP~zuV<9ǥO#PInY~j;@B?_E˚ aqsB=Z8EN}Tho+W \?pOykt-Yjꌣ_U9=\[Ksxf3w5|8~H>-$)_KW/zk]-,T߮o]LCu.kv2N 1F: lp6HF7g)Oc.* a1VE{ra~xgq#u`>,',9Gp/4̟-fpX۹Ca&6tnCq-9o4iJA9 ³?]F`Wno>7j"sJ6꫋qqVkbZIߢhZqa]^Í\b3 /;3;_Tҿ6>i5T+|UO}~U+?bfR/j2\ӏ7 hOd=KJ} 1M=)X#q)Ó1OGUr/_JRA{\v޿~am.Vtgtÿ9M"k/%%\c n /9 Q ƑVl-q[kUysCh;7W(_\{fKh\x2)l75XZӵZJ&\XZtZG}dm5%EN`b (Znoi/xSC s״BUbϾj3ä\b[}]̐[ޮswڏc^xCc>2Nad kgO$2:Lf eCd2yj=:Hz r}.}qa_,}qY֐|D _9ܐD QT7֚x<dc3ᤶEA.D,Ha, By敶  stӡ< #$ Fj B2(B;PE "^qZT]P投0+FD9ZM&;Ohy!DZh@24:Wr&LI5F.e5Rʃp-1 P/Ag ϕ>WPnL +"F`04uJH 5)po)@6 B12b:2\*밊a,%V<*%dG(4V踤W:_%2pVILG'Ze Z [#KQ[2v$iMR ex;Hʆ| zUSM#%nQd)m% 2 @&0?r H9i?yIc"v0_Kf XZMCFkl}UJI&?aR/1",DDS1)y+D6gǰ5.tLŚ9+h8$2MHQӖn &!dC| Ox^np#H3qli*XMq+ 2`RT)Dh2 `|etM c"pLp gVK<:t.5Ej4X9~߼LGp(k@V6H4_0+F-u.p--dp=j`^EJQYj楶E@ۖʠmkLqbi^E"CuËJ1a^cY2X#I&W/կ Ko#OfehJ 1nE@ss# R`9cb߾fCν6(^?{ȑض^yșd7 l^v,l0Xgdɱ`a./-rٖgzxVd}$'r =}fEm6>'#];(Ct.*V΂F "(WͮޞJp`ȟi]=QuVxʋƭz.j.[;E}z֖l騰y{U׭ ^WrbڜzP*{ӭ$/[^fjUbqI8׍2 Vh9#,%ϔb}f,!$X, <#KہOw9д}T6%YK;ťy\AH0ޡt^9;uWPgu[%gۣ s7]Kyp>lx͂w8[]:v~hڬ7^Yr[wmmo;C6fCznæ3"|C`)T5V +jQza%R0V +i&Z+WW`oF]rQuE*n]]*շ,٩.fQy}0u'W+h_Xbhxcd*v}^?ޛM};"cn\2Nl" JW"ut_ó3mFY.p _)UlHO?|S_'-«3/=S5WEx ǢsrwbS鍲Y#rht17P12Y? ҅s즉؝.KeGW5k 5)&Ӑ̛4&bP>8|a9H .{1`K5ڥB^zb_{Yj} bKiLB{tnᒇVv(]NvIȨ:)!& : Fx .LY ͕>*&`Jp@O)!熆.2:}>C **Kp5rK8+!ʛkoV6JÔQhimY&AăOx42AdvU}rI1=ܐPd%:4rWLN$7VH-7^Go-P'W">݂i Mwӑ!o$z6Yl祽6U-m320%+%I< ae΂3FvJ[@Bw,gm@Jy3z]r$8eRR5c5rk(Uta5KOӅgMJӛ]JGӰH?7M.G+JXntltҺ%L3B*I/`X ! r2 2ӣK{T`{!EQ`T9bFcY(L M)`ƮFv'f\֮jm^Skkvpc3a4Hx"A kldN-(CRrBA+o=K.X+;9qSi +pGG=wpƺ<*Q2qkPq*bwS {6erFWf,Wx:tRL7>kQ_=El_ ŭX̓'-`Ľs gY? 
W0 T`"jmLy=fnҽE\8æ}IΧ4n%n6(4L!Tt+@%iڴs L7Mlk]d[QV , x> Y0i=_*eVpA@[W.y)Aۿ6Ht2릜siTtҚ&&m7|δY;mv/1q8`IK7S^9w d$3r'mJP%M $uO0 ܥą`Q*+Jim^J[5Z.ybS(1-vMUuDžahhKW<[+v\-\w[Ot/HBP !}ф N•$ 6>q k> -2G[1 AFӒyg^ա,M RrPmapTT: XIUHHUf{ 214N:0ŧ:;F%ⵅuGB6=d'-sf$V_җI0l5K5gaR0ښv$J)PJko`J $2Q9erLg& w # G1ECDR:?%y 2MRyJ\z|zj5]\«3=YYvTOXQ;.ü7&?S8ARGټ1ŹC2` c{xZ|>RMq C8dK&'k24tF (Y\[ %HqR?27[sZ=^\kkO/f'ޯ{hIV\sꮆ<~:ě]s Ɠ{b;XHu#v #QT06Oe.> =sx:uOx;GlIv5Wb~6uZi)#e`Myb<(jd*9PCr~v R(QNIד ~Ц4rh]^,JQs> >maXmI狫4DeO(],a\E] .Bd+eƖݡʒB*V +=ҍKې}~Wrܵ>&e#_/q)ҩpcm"}yAZPdKY&͒mv6,U^Jϟp8:#) STQ 24(h"yM&p\~}өә١W=k& ['LY')L(w3rXyc;;O:d6*،U1j!ij(feƹF0 נsb).dI=+*]E x-}TgL,} ,a5t` 0)fMV(7 Yxo2Q,YTƉَIC74WDz>tOKOL?.iF l[ω=|%Tm9 jLyN7\B UdFʝc-Ǚ=fO;ϐ' PdVyǑ[tB$˔(k\)"ᅑnJ(e4FIJXJYr1,EҠd 4)!2CּsaeB5rvR>Cp@фT=,˔|8٭l =vuti麗66Ljc>/Ͳ+W^>QI_虠y콦3j6y#nڱ [0W+Zm杺`)D__gnѰɷ'd[*wRBRc0"Ō1^39S12KQhmTl䫃tI}}皕6*P~R; c9i&d4g"pɘO_$hswHg/SC=YQ6KR."SC`4!DE(O'H;h76 !Hae^)0jX#0b ːq p*9qw; ZX9O=ڼFKJn)v[er M: w3:MLH0%e IδS(c6j<Tģ " jFb։R!iDAXЋ#@rc@qt mft5r֋(h3_\5վd[I=]\]ztgw7_`Y{6?{˵mO?]v_JXUAqsv"vvoImg!YpȮ⥠fJiRP7܀%r EH Ճ$) Rܜ$9gRR5c5rk(Eta5J.OvB7,6feWgO}/5x?]ƏƧϟ5]#FKbRY:LzɅ R1f᢮+AfdL (jS F%+\=. 
<ʙ1hvZ|nqǶZ= ؾ$C"# zv&EBnArD ul@sEr)1Ȋ-隄," .rf>#h^F%(}шc[(*kD1hA#t qs0IG̕PG&$r:dEH\f()y.h"ۘ.eJgB 4%&Q^(+* k^#9/,µO{ոd[h*E9A/^y|G;51A]Su/}{|xy }/ާArYݽoRjLǸK߀J娴kާ}w?Qʍ5Ad"" ~mg GMfUSZ_m&eKh0J`&ɤwepI%$Ιke7Z:̏G#>¥}jY:}qj|iDm2QV~5!akFM>]OW߽O>L?Mz,BJq\>1{w\5 :>wԳru2x\ رJw6^BPϖ;zt/^h4Kh~+@T7&'r J!O&*3$I爎{ahl%瑩1{eI _18 B^Q &9M(|P9>)Q>vn4%е4|]v@^(0wz֚Y=0%-L_tBIĎ\~>Hk\ Q>ಱbrQ@Vϕ ^TNg10]\ 1$h<0  NFJ w<*:tRT}YxҶ}p8_ft=_75+Ir;ꚏ9>Ƞ1:"eit+f4,fl˥vnovjz|ݸFVE7Z懋-π: x~[9|MFoxbXc/|I7IZn=߳sF*Xߕrl msOuƝs I{Lhi@H9,lvl=݋G|Lt68&J(XåI(xƀ-U/#|&> 5wRck + 6tמ,C3tɼR&#??_(A!FAss0; a)K !$DMU  xgqnhh ӑp9 [+FzYj䬗qW |=/VqMљv`YeINۥe9߿;.yAb^ZȼE*o/[6(_x~U&ꚼ~#]y.3M)]ՓLI?T]^}r."=+Zd=1F' l"&e-ȅ`I*K$ITLKRHHj::cvjlOԦ{dJ^PrZRh槴žΡbřoY`Zn/iYvΒZh!* EuĤ+k)I @צ6l5-4 ^ M_NK;\Ҥ(!5@!GeA*S::/I"Г )Mʬӵ=V["}PZ/&⥅uO,G !oPqҹ^3^k(I0c y O&qZ6$Pg tvV(o`J $2I9ert%ն7(ML$aA+H[J2??mᒿ`\Rw~w+b~bKrX9w?I&?h\ZM7AHqv?$SȂzZeFp C3L|dﭑxS< 㫃]3 WU$qHY$7]Ѽyq _ss0<Чv3^>V-z͈[AxԢl Aܨ ޣiv$n3} ։3Fg:L֣y9 _-;o\Nfђ8$>nn#Yc rr8yɲޑ~xaD0J&#M89o`ŲOW^yzv:M;`G]DO }#6CTl24|o05g席1{Kn+@R:74Wm9^5">-IWln8yJ03AH)XebiPUsg27[ﳠ3*g:S]~ܗxb-&,;(yn _P1ǯ}!;6sI LVbV6`oTtb1jbQ%-R11 I鐝uѤce2 ?#Ӂ ]>JGu(\Vd"{CEb̢2NFzH(bȼ0ms>ٳ!ȥFX 6d1_bjKm7'yͼÞ|ř9ZeL4 UhRYCZ|~w 31?M.Fy1skf5&|C謒JԆWQ%Y7"9dx2duTYNLHYJ+9kGD"g=^\<ƫDv{9w B9!k!MN8' $Y)KȒ1&sgώ,U܇ 쏤 L `!DlG\q#rK./(Rp6PDd^[C{@u޵>q#/{~*|7*j~uhH\Cl)_cHOi(Rʚ Lw׍n0&%!ÌN%Q>2R l>'S'jD$7aUJC٠%$< BP-0P"}rZ;!Aeu9H߶}.WFjgP<;!7')[)9X)PB hk䛍Āmj) DIL8IU}EݾZ %T=Xmn| Y; RqKi w; D/*.2N+i@7 :ϦÐ_O?~}Cg4ԏFP|0,(~t˴J{ƉAZTw~G× 'YƅC"W&!nt"XU0cx!w$Wh c R瓗 K@kn>=n}C>_ZcJ^PDS=I(TF~g* 3B,.:Ƨ :pЇɨ}iel`橬ԗZFz!RueK@hTBb /-:.YKeq@2D'q1DEp6()Q)FXBSlCY eyq@ųǨ&89?2n^Cf_,?-jvX[&׍9&~D i|U}džBI%p$ WS~ Dg|ݛ>4T'Eۃ;} R4A8't[ >29۷|{K V`Hx\Ţ/p}4ƵuqlёڒF4=}^*J4*Plaafh;Ÿ~o|"Lʁ'h! 
[binary data: gzip-compressed `kubelet.log` from a `zuul-output` tar archive — contents are not recoverable as text]
Qc͕RljDt Ѹ hhXo rJcK-O~݊ uA9+lG RG(IS-( d %.dp/!O$s%µ[MP9@ ـV#$CjJ^W[ !tV)ơ(Mհl2< + ջZtt(McH)m'-Mn wUTb-teZ4r3l`ǑZ)JbYi=PI kNh$ d BXFH rJchuY;ia;‡ pĠuvd͛L tfź(ƍ585T& MZl@Pu4Ù@(qHseԸQcj NO:Z8$/Eyeo(_uU*T'R &bM]^ud$E̾}A[B#D^>Wuj@HN:ZGAQh8P46ʒ>fʰ WPmk?Za@<[gnp} [[E~bt@q"`>f)*P NR!'_06Ì6N< lɈzD3 ئH]v`Z:p< 6(}r,T2z :9@mi>Z.*``ffS@yE nX@ƂB'P$RDM+WeD#ҵa҅qcMyOy 0|IE@V/#›q*ptf,XNU~T}%*y*ΈCVɒ/YI|ǍU.ؽ7X|Ť<$J,xҗ 7L%jR"2[{(%'&bډrc[x %@@!Mv%`&Ō&ޢdikIp|հ IGd₟7Fu&Ml;b,0mAAV ?KZ426Љ l\j%-J~B:b8+c(X&%rd-A*ե& z+"ᨍ hF'V a| F$W ]`Cbhm7a{?yح0:/lxM/N&Zf&pB`a݁f`:°>; ,fwm*5fZRAΣDjaK7fSO"Fb|6L@ŒТ Ez~D6@2%TP2pyZm~n@Mr9VpJԃ4(L U U )RnCb뛶16m6 + EJ]!&5xrUpՎ9%҇exy apac7J1-FNU(&#!,ÌW=gU4+~(* mDRhcS$g6&g2"7 = ǚ*Ei H>{*3NF&C` $LL'k)E'G֒Do)?}057S nH؝HN dm1 & RUtiYS0q -!WF4S"q&d98AN=F 'y(0qK^J̤VH `~5 "i0n3;S&6_9eT$,MrA`$ 9xŢhI5.Λ,?u+[V%Հ[/ЯOb7-;=AS-ؙ5 5cD풳 q&Xf[XGfĕU0V}okyOG6rN|νhJlU"Rᐔ@p0ڃik @BY?O%~JJJJJJJJJJJJJJJJJJJJJJJzJ MAo5'_y)1?/xǗ ǗZ/>#|xt(_|8m?1:7qE9b3oK6 A}nGW>8Rp/YkAɞu 0kȞ`V]~mu2#K|PaW|o_+/>OP.E飊O]^>^(iP~Y;$!7*1cy|7?;>f{͝O/?/ߝZ,󻿷v9[WӲPJ޿ܧO/_'~9v/9G\ &>늻늻늻늻늻늻늻늻늻늻늻늻늻늻늻늻늻늻늻늻늻늻늻竸Yh~wV_-lQ+X?ǘ:s:XLg(}@|_ĭ_ ["iL!'Y*BHfҮYaQ벐"ק8!&Z^×+z>d}o_|f!ɃJ%?_-+MdztڟҖqhoum4l|>[ 7+(Z6SrC,dn I G46vK7:Գ5fn ,h \SL"}5pGU^X\/f~r{᧗(?[8c"~X\xvsaEٻj ꓙs^ad?n/uYJϖ3`rYߡ_WSߧCcAT?")%ŔZI5l-IceqdܳAGDJU҃Nztb =(A.M4tj!u(6w2ڛm~Ycx*H_6,Ck0NnkM&`}ar0}9][Le}m)Sz6JFw_6j0\a؛AA&^ spCdӫ7;fds^lʽ7يfm'۷)oR.˅@ul`YFkoX/d~R-µX bZ,\kp-µX bZ,\kp-µX bZ,\kp-µX bZ,\kp-µX bZ,\kp-µX bZ,\kZ|ZϨZ//~PNb' |ۅ^83+A3# Qt6iK.x F)ys~h?|& uo/aN  :5Tj!t~Nw }(]{fȕbt6Fڣ@Q,: V3|w`Im-{$+e,ن :0>9z-mj 9mJ[Kg)<sHtAxcO+@ԥ-vKAH,> y.b^>a|>(n-&ͮ^!̏MZRI 9#&j׋1kV vkJxU*Nlw TX _R_cS_ PG7Aÿw:RpI{ X]_:1ހڳ0mwm|n݂%3ehw:WIQ}o;J^-l3C-A3'-SqfozMw-?MAAukk&& ˦5 F1Ju鉶 -AQc3>!_lʚ7Kwm p怕po!}8[ N6Lt *2x;4Nwxw xhᑇO4_Xwd& rbP_Sc2r}Y TYm7R8BFgqmV0Lj9ǎp.TbŨlqOO)IEqʴї!XPZ%UһWGtpYթ{v)sy+G<<3i\J$`0sv)燦|yw9k{2X A}0yTQ͍c:vFr^8ӀY? 
9]a^h :&@!uFeӀ;2i\K4N-SFd)S H+̏q A 39:huŤ𫱈yǵ ˧HZ.˷XI눙&p!l@E&Uw  8JT&_<UX-zg՘IDk1g45[!- gOV}JQ'b=F,Nia=Q ׄ((FD|x4`gFE"ᖛ50$5A0,`DXyTT$6CvY$PNbXXH5pvOp5-Թ)LRyS &' &ꤧ @1 8h–v3Y!sq-g{߁4Ga@5E#!H0B``GxpTBEU^\$ @R"`a'TG-&x#30`̙rkl٭al0d t.K]B'6esS7L7?y AGfDbWDaGdzI[gv^'++pƜD; ]'*u16N0+D{CJGt`ZD"rReن[cwLS٤PsjEEkAK0ѣ6*rGbfyA82)EpE iU3bVA9AQ=4&>66ۼOE#fC5"ɬIшE#nn`%2f1#73mX(PJs&/V`# 6ABb%#1S,  aKLnxU[Ћ6c|٤P(2EZbы[1M2K 3ļV܃={ƨ G:H@cSч٤P}(3|v̩u5r7,8ʹG\ ~`mBYxEp:6էCEǸ c`i Ŋ9+˭"i!KhȽCkzhc:$pg!uo&MLryå O!3`NLj94Z#,8ZI0%l ԭdSq-LÒCjCo0J^f.&P^ojxȵk^4]7g=%SL\r.w{v7 3V灖 Th`ufJPn^^nG&f6\>(yh'aytAcC$T8|1T)b+t@H7'OjRsQ bs g"V!R*vxlTsvyF[k.ɳ%{eHLϦ7k *Gvwzqh6.1nڞ)ǮЧƮԖ[_6R67y7\;gsRي(`^lbힲXQ-F$+)0ef,F% )e!07{wWh<4gg{{fwhvZWnv=oW%w_.}CRNY<Lg{۸ʫ Qdž2_?ڶǺBa}&9lzv֭*XoR#+Q'慐m*/%͑^300c,W~$El)J)ЕB))IM&J9L!&ISVLDa߆t8Ҁ&#G1Es\ KeўD$USWEh}릡҇S]_c4{Z ^76,!FF)y?|k~s( FtD %3Ww~$ߧj>5RC1YlF?$SoAEzi|trr@GdK&l) C?p@ә+Va+UJcN60 r8lƏtjP,H>scozt}85DOddq~qeWK!:τ4i3 7&~gYl$6YߖOK3&}3էih ~nPx\[]xm}&FKF̹c6'졒kjN#/478lܺIf򏧓x(`uxzƃ&Z.{Mdp'q_Aϛ0AX=V',RU85B@ #zB#<U9@9C^ yt՚h#G#EEkANjW%hMJ;ker 4Uhgz+.8tMS.)6/Ηt'=}⻻~ծUJ%.ݢvQ+3L2JePЯYg^ßY[TF] xzLR ]}͈G7z/޻< +]k4TI=z3h>ZyYN`]P뻃`9b90^ :f~(rz1~q_um4|pEr{BuWRA#ĕo}\``: P Q\Jn{\}3R7,z5dqV?4jVȺjҨnJquۢBZ3 Uu\Jz\!i[K:6oW'VUXj mM- xMB#-GL<{n6=)9kXzYZg\*R/@JUAiW{Yj TtJiZ&G/sX^P~`~|qkjM 65<2'/bP^T*DUym7݊syC3ܔy +J]ZA&=Rip +ɵ.v[E;B*fZGAf+x.Bu~.TkG+PJPvXW(׸l+T nN%@#ĕ2'C> Hn>BWRvpoXz4X[ ~㪝\}vjAWT:-\=n[\+UFBl6Z3 P-׺B[dz\;(F hQ-$Xl0MrӤֺcT:c1q:'\)cPJ HW1u\ʮq r`?}zg1:Hqܰ͐VWh'WغjVȺjkM \W-z!EF"Jd u\Jiz\!ŭ]݂ Z숋]+ Kk B|;(x; k1MrӤvު$zL!SW$@6"r"wW.#\`<\Za+R)vo9U`눙x`n&Θ֯Im `7aBȜP2LْCljG*Ϳ#4,Ӷ >-B6s:H=WX;ip&mW$g+R+MqE*ߣՃq8`qX6" H};7,z;8q6-j'A7~W᪝J1ʶquۢ+_Cs3v9n CuL I` IS]4>BLK-}^qE-W$\.B XqE*quR(2 nNRk:H%=W pǺ"2T:qelMwăVrw{ÕMfWgxABv B7g1 xNn$c\s1 I-t~T~~MBku+ӬB\cvLj+m2cнklפg"U)Iۗrϵh',?drΛ1ۛh95=Vc ޣ"Z^Dh>>2%]1͐_SUm{k,<{T_Yc I'c.BWE))*--}y65Eo/g2QUOgg׷VōO*$tt/eܣruM7y8|?;}+r#n~]rY\ FnW^w).V7b_JM`swDŽٸNN2 ) HVA('IpI=?1J =UR >aUIaI};uHo, 3(}{=ސ pTp:[l!O?/LdMowMdHGUօQ}tM&8㒉Y#%:U<V{Ii.0K 60 
iws8h܆2m47nެS\5Fyym߯,B''t}Z3z0pSTOY_Iݱ4Ofo70M(wO˧{{݈PT-xq1?]^x >iuc݇8Nc+q3[Ho<rQK{ؾc;kRzwMQj)mLIch(Y|2]9<wN-sv*:}}-;g:YZHXy2kjuMr^eS?>)TrvKd†^NOID/Ջ޾[.yO/h<{`? }Vl'U/ڴU5շ␪^/zh{ݡ>]zgvSn@Rr80Ä._OGT.ͻuύFͬIbDZW_fWLeM|U U<6&ěvd?>hRT\6 BfFstA;Y2tKY&͒|&V&j1˘Kqv~&x 0}^p&su. zөbۤ* 52羉y0}&ey>')ᠭƎ,N&y7U)4;aw'$_c7Z$& 22#,NeYRW!9})8doC'g􇟠UpD{.Gvia^{l_j@lD=[V^iYAZk}K)8NeHl0LK"8/cd@])(E2HdPt2K ^:{9n/ 4Vj 1 Y@I4EgK>$XIKHy)rTK7WWEHfc!Zc#:n3:6z#枉ANq=h^HF|5c1nrU*|5輟:s_g1;Y{dVI2;R-F3.GK3jq\4#h >-8=r4 39"pMZh1h wͲENjM3A"5e%&lҁ`ܝ,2Eˠ BW,9]KI}U0ߧU%w/ffi^o䲅BXJu]?.uym aNk|&ڰ3\o)̬*W9' d|ˬ1Ѥ O|\p&.',z0BZ NF&BetVQE@/ɩl/1:gOB^&YH $A nB4{8òΕn˵ZqM6N@7)Pu&$8dz/<jtٹ"Euy-xV>X5߂}܌tL8AN49'\ c)E @P-H#}K:;";4 ֥i|Ahkd7сDjd}:pRнfjmQJUgQn#&i> WY$Q)*ZU rI=fivi~8ysS5{ibLnKq8r7V]>O|Ȱ@m>x3zh) 9sT$|^Y]rNYATb` C\<; \k's:j-F{tkǫ$s:~*tt>m},|̾5t'ZM = v癠i$2ҼQW&Q3a!t1W!Pv렐v [_.~:#$no 㛶'i=*P :(,Xc?(4WrPli-c?(bŃb-c9Ϋ?| aZI0JNV^ W΄O u|/O͎'c3+?^XNttitSZrQ㨁$[˘ 2ܪ,MQٔY( r>mh'\fyUAwmv͎MM+n9~ۥULm}NG0BԹRˆGXuG9OQG==Ŷ^=x4̆^ŝhd;_JY=OڀPvF6zV`7HA@2>grQDHEV5Z.>#hyzmkt n6ӸG୬=~ueA~K>*`;_-m]- %WgUJ XI;3G׹Gt0ΰ@6$H< rpH87[fd0jM f'0rAwʓ0pH|Z/g;Q9'W8 jR$oetZsFK7=?VnuMb$y ӏ-nTu}f\Rrna 60 l9Y*wemI ֎`ƶfa[0kN6pp$cd66BY"꯫άczW?S--^]]>|+6]^k}nfVe__3]]?6t3L/W5烍=o0؝k3\7.-,"|no򁿹CMt6b2o7ׯ}>q2xi _Ǘ\[폫.)9cT3mcƓ𪲦~ /?]_Y6TӋGτ |PVr[姰nת*ܭj])moF^5 \mR`zWy#ǮqP (BcЍW{QrU|pOwU"O|S+t*]w՝@u!|M'rg``q'ѥR!kou~mrrWM/?ܡRi>'qa~xT70ԁB cT ;4M7yt)ד-EC _U{ Тf^%J-EL.p3`>Rl_@f ,:KWQ5 q*.f,E5`e=XA cTZE fuw@1GTgo ߽nszhn {Pv37yiֿ>B'|ߏ^Z&~^3)JgrsQPr1lM[t7|o250"PjǓ/_"m|;-x\;@*댩 ɸ+!AKS֫xQ?wfu=IRi}nx\kR9*= Il\LFi6!2vPIQ,uLs-'˄{;Ǎ4ᣱ&BSQW!TĜ7eשl$. 
@#l|Tj_=kx~U{F-i%v-/^;:vĬ2PG/ZPupV6xY YFy`՟~g~FY .6LH%P9(>eEU@@IiU3) )E/(18-"QIVgs.{guLiT˚akptZ_ |NwuU_A=k.gmMҝϷpC0<@n]DnkvC׋]gurvH7$y-DW];T6iaJ:0cl][d]|޼ql>5zmFeύOd<~姛o=7x7 |C o7㢹|eӧ;dR/m۷[/~}6>vM˭͏d n6ww`ܻ`wB߷a{/mJ1J*#$IL}vm`%5jo JDE&)e4% 8mhJ]>%RSA eOК{xjfj|], m } <>SMƫĘ٠XE)[cL[ LR὏`\P`xgU*5u $+' ֌ɠEq-l^Y:F= o?+v}S6ܔ\oW,K 8𩯋s ǹ[n& :gE>WïؚޠԌ՝k4,NMy wgl4UK%*LVH$1qf%W6DVVB V ]'4\d{8CJ%U3uێIXC$h2cg3c(6L̮vkұ/k6Y{_ڽF}p Lʆ&X:%':bn f>eY$Ʈ0bk/#3=vԨ@LC7/RXdm&sQҐmtUFlDƠhpjҔG-XF#J {CcBseFl fF|w`=tӥ֤d_^T-"y'cw,z+)"52=$QF!N"\`Yh!x+|ؚtˇe>3C(" 0)'8ޏ~\br\Ny)-]U(s2So@t `A82]Wnh ?]vte{w),&J +,y1tp(]+DieOW'HWkYAtň% Y ]!ZźNWRNVjl5wT n.DZ(?|ثOx9]ű璟> d<70CЊ3ATHzSdIZ%CWRhjuF4}4-(z DWr] ]!\YVh :]!J{:A$BZCWגRJbżkWIҕ6Dߞ.KiL(NxɛQ<{8Gimxގej<`Q?&+JJ%"j6Ճb}JoX z~h`5HB?%Lį/_pTs_ gUe]#ֺB?kfvf-$ۨ% W%V NbeU09,NOK T- R p*!U*!l3U“Q 5 6>]!&qOWCW)mAtA\+y)th0|BS+ˬ BI1Ǚɮq'CWl7ΌPǦ{\5GZzQv,Sj޶}rE,-0BG޻]+Diz:EbFloBdEY+;ꜳ3ʹ.ahSM`T14p*$]ii ƕ-ڢ+ˊ*t(Jɏİ`AٳCBWVخ؉hOWVJJB Үn9坧+D)eOWHWښa)Wa Wi6 *:Q5 Ճ':yV6 /Q%" ʲ,*Z8%yHQo00-|Q=/-rWFz!rVo33ʜKo$V\`^gc(OaY Vp,ECBt]CwꟆ6'l!\SCteCCt .gO 0BBWwetyte$ r+9 @gRӡ+3,wҁp@ˎp\OjGHQc;jߩ {DDWXb *] ]ZI:OWN˥sQ3axr170 (k~ˤ0ףޤA1 `ľ&`u/JVXo;m#i% .P8z|3 ަ`͛f~b1GR:J_Hr+qydKs.G$*8u|淺_ Ec\ד} d8%JΩGJM17/B:P+8J'À˳9w4Viz{Xj/N :_he}9z.IYb5Ku)l핵Qb@9SazH_!y$H<$ٗ/+K$iym$ڳ+β !yԩ>3ڿ*_eA4=r\KևS2Z3i2/CWnвNWa3+̒/G] Q-+wt5Pj9zt:8z2:/~FO_GFh}N{!ùS=y`#ܠ"^j~7Pjٰr_2V=QpY-f@K'vtvtaWsK+էdtZ *Ze풊A& qnpK+=5@FCWCo_(KJ=;\a:{aǡ ϴ1(n$`++tC^]pTˡ@Nΐ^ŧg7p+.4_(Vfӓv#MВT%gRhz4 'wf߅-iAt5Fh :] ]!]+s> Uj1t5@tt5Pz:CbބCWs8^:] Q ]#]pxSQۥNkƚM`M]5'~Fֱqw{:-9qh# >8u{PZIx[ʏmu2:h@9gsdX{%mvD8f=_/6jջK&$|݁W?߫WvA1<;ܿ^7E^j{my8GBG1͋?8~SjQ޾ۿAԛ5ww71~G4m5.6ߕަrya9Xma_bW/|1ɺ/޷pwQzr'ն5q6w񕬺A6TiayB}* O(-4mQ7~C|k q =kKO+1@ں/o{zր=ɰȱ%C5z.ؚXWˑUl. I%߼ww? 
F ^ܶ:ۿvLܮ/k#\JRQwW#гn*T2ؒh1@ 7}MNthV՘FTU*a>\MtUu|_`tlƢ;BC{֕Zɷ*BE:V,ߎI]efj͉!f-Š D[K!BMP![%k`DMԴA3j]^u9E"gqhѵh@wyw I5K3uZysd6CIqiJM#$B"SmO TǘnӈfhE^3i*TT rz7Dqn2vzM#mF)hmpG:%cM":IF1T`&?rih*|/Dޡ͎ZG(HGUo$7/U*ohSآ8A1#3X2.Lf{@)O͹tޜǬ*ګ;9SSI{f S $]1ڪTw$ף(]Xc1ɇ%j]Z#vNR -rI!a##$63& H/*M(-sU.RHQՒ$DH,8dk GCԨ*M|NYb@tAQ]|{XMɂ1!KutAڳZnB BQw_Kͺ#oG0L#mTȗBi")K>$A)@EE{0O4uOuh;g۴l(*g %Jr Xe]]/ Ţ˳եcMm̭lt%ODHY`(͍kLV0Փk.keB`T" TNU쑔YlXkYfbUhS%vdZ[ElJ=RsA19vx X* kRTL `6+(\d$ QAQ)tJ௒2 VHPcLB2m v^Eo[PB]Ѳ܁pYWBnTzCZ dܠQ (0,LhD;iש;".FC(dԭ9ƒ9 qN0t Aq5 7)ȗGC%:c TO 4XG9(y3XGfk!\(VZ h2ݙY)(J892r$XT5O,)Rl eBRFAMu2R*LR[FgU]%׭TƆ,f^n!G5՛F_ʔMtA e͜`d5."2DbQ{4l {xWP>V|p.#hҴAH۱~ACŌTUűb)"9YhQ1&PTؼ*Ca:iB1"'C8Ӏ;OջqkX\ɍk*ՂY mCka&a-A7 /M*}tcU2){H\!YZe1%OpHv9Oef:#.#) >@E&rZ%d^S0P>xK!j 32]Kc4/%$0AɣIA'kԭ-x $nGfp6ᩇ*YȩՏoTC}^iyǪmC5YK!eD!v*"}Əǧx wYbTbrѡ2&X6o,B&Ȉ` A]jr6l.fsP6DRDg6C}CPGm玺a4(},=k q[O -Ahg[RC;V@@]C2bB ;h gՌ#{v=`8JA"cQ:fWci63+I@DZ5J+1A2~ȃR!*8vGyPcSQõj`Qy0 !뮒BiY&RM9QbTj^ MB:kϢ;4IyTd52+io jyzrDm if-w$aU6j` p{ŷ`TLC\[`=:=nzyr~ _4usIa]!ݸiFOʖ=†`#೓*f1h57tk*fkM!jKBq4RVO ]j313@JֳʌؓU <%2`CrX)lz@'3UP3hogr%5t `%(H HČ, =>rwoެ7al *'~x߁y 8McP'7 d!G߁g] +UA 8Ca( jr#F5X{Y;p`\SclFeS*ajfМs3;k fnV4HF5kփ*M3| ڴqΤy@LPXZu!; tmszgA^ ь%4NqeAPW ٢WqdͨAi Y'XʙrL =W&3 M F:`=YJ[Aيp*q˱ %׆.I2[1xx@A4T9vl) ʝ6˥!`PrAvʃf-aVAH28ĦiBeN Zf !Kk|A9mT h]wT`]pJ&^c#ͧ~pP4nM@xݙUi(M}@q槟Y f[[4iMuH$t믿aX{?]o%vꦔw &풮Ǝ/m^|ڮ/?؏d|c>~?nI]|/"D8޼'_~v{?^b xqf"xڶp~hmܴ~-=:Tam? GPeKr3n9N ~1N/Dr靳th N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@':^=-@ehwN ^@g IH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 tN i 'p#/ ;JfqNyqH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 d<- mr@vl_h9y'P Nst $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@:'З[!Zϫ~[MG7݆/_@Kkn~Un] 9ҢKrK1.8y%d9>ztǵCWYr@=qQ8tzv:cQhCWǡ<ٓ+:H[^;X^ ] F/ZSҳҕ1FOtu>߻M=# ,W](ǎ;_7ˊ^J eW]is7+,Jv=CUܯ&)1HҶ9!KP"mK`zFF:ig4E=f:g 0r4pE*fOݚ`ƏLdL&)^)EccFGcJ |*IyB_Ӵ:"sN<s8s7־):\al8IbnWwf'O;wOߛ9\<|Hߖ?\ȏWo13, $XpS P=}Π'. 
]W84 _ o^Ɍ \wg0(vǧXz>Lnw:Q4Y { aMy#W%U`N]sk*wX0>zMValf8qL2J͘",QaL(E[ }15x gUPu?u^B Izϛ$17 څhrVZw>tWIZ+u^Hӧެb\^ص໶e5՜ˢ9ɂrrZQ/9sIq2@n.ٵz,9,'PIs_] pIІN vUwPle2: +1=mo>ݟ ^C+_JvV^횯.@յր[ޅ||0qiU#ܨKV&MuN^VRL@ypŘb\HȠtzͷ^0Ѥn7>Ѽ]4WC,aU\V'{D _Sv|x^HgwLοސ\/}ml)$TƂd#"ᷠ?-x.|HDu _ixQ35f01-v#˸#0sSCF;6'MKR0\fs#3|X<8 j 8wh̗ 8Ӳ 0b4CHouJxg0cosmV~,ƹ-vɻO-tehXR"\̥i/(0zjtzfƄDe4x[z;+vcw^}MroY?NƠ-hs3bܛ%<#u=km_eÞL2|B0!s 'Z1ӚnjA27Ĕ2,)y10u8 l R;ȷŖq%?e(L5x̹wx5W +xELh ׀:%Fvݿ6(WZDAzdKbo Iݑ]sCF]OR0Y-}^߻韇s*!E-d$iJ{ETHDEp5€ wr[qⱭP fvMt{]3a\|>OrUNir:Da<}UYjnxzQ \VYX-yqcmʶ[,úr]wU`(d~_㴯ƦIߟu~ e#1~ʕ:P| e})+?3-c1KN\U}ƛp=g7| ـWMoR1ExzNx~wid@]w8U:R*Y mQRUEK-e1tf8M'bePeȩ iw dd!1CbIdp4Un¶ \q|c")& Cd2yj=:HHdtL] N%N,gc; Y:)̃XEYBIy敶  hO[8עE5q6{?a`uG1&I HBdowlyfO6'ÜQ* YcI"mC: Д(/+A:'~NQ(Lifwi)2,8'\$.S1G Eέ,pfv;X+JH!mohME+tHnQ󕟴ó4sa#}^/.rh﫣'5r}X <aL :*ްg4./cmo~u~(u607 -.VЯ PC PeJcC|;c e 05%a4UTsc%bD~9fRKaQd)p3,ie<=V~ࠕ2hs\y#,|ґ46Hk,8h\2kjB0ZN @FI,rr٭I'׵i9Dz~7bz?j^=q%yMQI5A$AK~)AGPrb[z[ˋn'p2 g4A[G4QweFgrci ` -12n!w+d}!Yu@Y wSe}H]menym뫩WHwFEɈHl`:R,7J`I8:GF E8i F++M|iGBPȑ1׌s&'JJ"u4Q8ç3h[;ra72l9?"pdsD95l,1{2jg,wPԢFAR̠/?|ӭ6zy1nЛނL+_C3SşVa6 }_to޷I&֦@ś u\?*9/<Ϙ`DKکjy|?P߅4^DWS>Π}3w,C4~Ed:t3c,N;i<4e]4(͌B:2}%A>/o?IN*/v>)-ꔿUIAőh+ΖUs٫N8]2ٙG3oqڴ AyzSj:iFڼvU7ۤ\~2+kd(^Jŭ]e~߀elc;vrIk$ɲkb|Ce G5X! 
Y$fۯ)W4PUjzOى7Tev1y&*/HY:Pκn`HB9Śv5}E?p̈rՖy)US.dtVǸaissyB%_N ~Cq}y9N--oL;kEԚ -'v 4uP&&ՂD¹7dv4zI]0.b1~+][Ӎ<28?i֎aA@}eZNW-ƒUֽ(mݷIݏ['/:G @tFL;3/M0$`ut*n{*N<()R;j 1U9 Q "QTA9pjU.C0!B*Q S띱Vc&= IL&Z ih&Ζ5+}a;լͩo.v`xrR!0I>:GS}tz?O3~HJ3.dx92>o-޲͘*S^Fb&E-wz~N{I;VW5 D;ÛN,L6>lTB P*Nu,HgV"hP ЦRr.Mc`$gk|'H\hNLvkl /xZ;=8w7_]pjN^;322sB(!&cɥ8T8 kUQ 䴳FEC¦&:Q \.it0&*yb kp1qy֟@XVxxMћjY>e,²rg3D?b-pX |NyL./9yG^i®8C%v_P?e8q|}Ut'% oH<ydp,ҸqBH0茥dafW.<iED\tv؈n٥Wmkdx_}'/"0[pWÆnU|v`x8o`g4s6'E9|z:y|دa.6G#.Kꯞjaq6&-P`5ѠܒL+r:"G"{}bPM5e)ADXy9UPԁS t5t,ޤqf5S)I>6lQ.pEIØS"m,^Hd VEDVYEĊ 2`VpLq"r&)t Tlknb\(]j\Qf0ʳ@h$WysBmwc'w؇^:4}.awYW-qKQjcBc+WtK+/vҖϷWΗ җb?TPk{?WDTOQbڨ"NxH`|+凳Tq2jպ֡:%$r7Twi}+ޞ6wup፦J\r(Ih{pkC N!'|42 kE٬&ض\kJ=.hCL$s%ΘFOZ#uڕ/%n뀴I{8`&w%ͳ}c~>T`)B|YWI92u'Kɱ'bGh>fźKhQmiEuT hpDp so,'n%ֺ7KJ+Θ^mL-+; u#Sxo(: iM4Tʲq1D9ϥf)c$pq{ᬶq(IL`H([L{n]e˒h:˸Q-梔K^˕oSt_y $&y yQ e+IQqRH5*^UirT_x$o06m"!q<0Ȑ͠u +6B)"ThQoU.=TC>K Gf)+{jeġe$1 *Yfw&E̵(񵺼@^Yz|hVSӽb\e2W!8TeT+7yGU+ "Uvk2,rPF?ٻJ#Aso rht"jovLWe?f5xA+!vtoqdo+br?a~Noh8]} 'NM6nz</*a肜'39.hd aPN)sLNOvs" UqZOHt6\"<oWksc4s>gY|JEjlh|ѓ6Oϖ?\^\*#:e.{V0V'ٱ:8|9IlFɰGwi3=/ įӍ?>/GG?>c|/߽8z#?޾x3_΢1KEGˣ`|Oд*47o>MO={YG^ tngŔ)W/φ筫@n#/\̚hg•z0F5?ӢO?N^ /'Gn%! e -+nHtIwn#(<0|"Jk iiFʝѰW$$j#}kgN{ H h'15Hy H-Vg!JJTy &5"k:x6=ݔ|ĊY)="e+;m4C(hU yCLfe' lQ D'~HtW *atsxV,ь͢1#I%I?֚$ E#DJ,дlmy*蔦T' )\`g^chC0I牧ED:7Hux帷Rm .AQl.)P!,& GCdH,Ee;F,&4y F*UY֡,k(f3 q/,DHi S` a -ODYZ4nڮLiXjlO⎻;6Lu=cN`dq\NAvvD MZ#EZY WTjjJ5fl܌{f\QH'Jbs1%ʂ)g.UP$fMϥDk B.5*|"ɨ |A1?UrgP_C~,lF ;N/9yG^i®8Ch|(~=B&WH@܋}5+7M3o/j~S =gp҃>6~{*{<=7xu"n9M QOǣߏV~ Epel \HPZ j T竗"]=L`O WxZzL< \=LJWp%+\)l\ȯJn \ei| W!\1ƙnHT^Png;KO@ܿ0M7#xt_7rfȖRi]Nd {JK"FͺOAfY˩+L70j{XepRK W\ !&#lk r}vYJPC,q\KźY|sjǓ<,d'Nxp˃yѬDGYVx'[AàAJZm߶Gvw\_>Wc<6L=ϤLs%cb>Oc]wg3$X'B4 yό RFϵ19byP dKP ,K-R7,5)d-R _ ,K-R ,K-R ,K-R ,K-R ,uM`Xj %yH؎lQixACNQ\rГԢ~! joXEzQ^TU kZ/F@zQ^TUEzQ޵$ȧwwvfffƲ8~Ք(Kd%  fUu=zQm^T}zQm^T[/֋jEZ=49m 1/ǝ$/F\l4=>ʼnQ*N>B55a>^8! 
{zjVR d‹r<tV%(={aq\~ ^ŭڃ1ϠP|)ȆFbt )}oC47sOo_3ZR\Xd$Q[ sky' g#˝)9T 6gB-x9xQk|9xlkaKkyV +Q2dAD< =lx#*`٬p#"aK|LTkPYDcB$FB,FurO+(DsnGFD >PB919a0ufF#~R|3^N8ΈE$9` YBB$A)(( ѱݔl'6iWᬔȨa*mFNhr 8p%I$Pȭi ԛ8#9OM>: C1 O1dmM2PmjIP}.Dস%2;A(ZRA<\ji+ҥ(]cpZ@:Cd=4w/z#H{s48sX CwJAYcl+ߣX?rޛr~Om׿pδ/0vJk[=`㪗/-7ʺҬ͈n7pm6dV%btⳲ(m!#A*5`'ۨ*=yHiU2cx),q$Nj+P6׮OdS;VD0 .%'~$rόH0 S dʧ;\`P`T|r3s0A]~f}Kra{P-+$\瑠It[#u\ZmN]Oע{tiڛn`'nwM¶EZ$D=`%LVLO%/Rˈ>Jf*\rlq?0&)8eJ-/gT0Gdhu'aY?NȐ|H[N$X8UgiZ sJ3[2Yǩ+1@CÀBȕ3%-.1xΒ㍑suyzC* ^(;Վֳ=?*EhƜ'jy_=b_dSZBE'jH•KvN^Pn!pAGژ$'mdNCZjsT~˧|{ܨ[f`Yo[s#c4 89uSrеXjO)!|9^4$i0*2)VGDTvk'#t }vZPJ"ub24D50I >4I(7\ѨRFS<$ $YU`Px%ƅhrŧD(f^jx&剱a9k E<|HHry1ʐaǏfpq)Z!TG/O LΌup')#`6T{e0D"",Na[55pa‡?ΆD:[jW譋:$ZzK>j4ָZy姝xyOW(_hT*[ Cɉ4 (VEm4WVLG9m9pϒ8ףZWDKgPG&$ PcSuf :8ۂ  @2j ȹN7|1,廓Ą?r!g9 cvls:4@"P>^0FRU#qErʤJa* \>%ݖhZ!9X>oƥo{lŧO_G'vn4y?nwnFGN/O7%t;W^6.ȭ,xt2]~g34bb0/r6Q}/)缁j3x/Mnv0o !gft?ȝ埗8:y~| 9ZSMŇ'O>v-ar0w&`UY( w鎋}?~}Xc0>Ӈ_۷/qKx9*sor+D玿|=뒒#F<憭?#_U`Hnu?^e\ zR5^ U\wsny %2R*y1r~QC} Zϲ]CyI.o{0̙^cavU;g6ll/`c/{xpԾYUٴSFH_";ȿ\V- [2ݑmtA?Q1}Rl~Mty7*?qqWЎ~?#ſt@Ri>'qn6"L𾟴)/qavLژ*NwP4UC'FfwoZOq [ wLuJU{W|i,Q=aeh2M̨"V]N m50݇af]Msq k \B& PK p 丐U9(as-ېi9 [?8X0Γ5As".UE (6J .X2D@#wDSR 27U*5 3%.ֱsYclʄ+QQ,ލ}xMk媡y[aDTS_ nϸ7~muޔ2i Tg}V כ1޳)=gL{M?<[C·P=6'Un9[ }etΪoE%7IpL")OA)m.Z__ gvZg'd  Q|9:e+tt n;mx<,k׮E NWS?Uµt~2bꁪ/ф_^,&~u*H l[I9oQܿ^ /)g+&ޙfgM'Ji:i𥔧ԀFʵ` ֘R"K-Uը9eFFJݞ$uC16yrPRP.9$R#6D4*gYUA&Z چeⳭPpHgTGE *^dTYg\îS)fH;7w8Zt_,}r=o UTk5{.UfndR8#)sQr=*GJ:k *(;Y DDI ɛH[V?i$ވT?ZѓHuD(V\W"q@AYДkkƨdbg(o*D,@'S69 "aJ;F@@&e#cBp,$"\ d%kvӨ5FRe>#?r{rBH:ś $\-9?B'"F1r4< }-1bxpgTR+CE>{98#Gp`JA.~Lz@VT,Ux`G}1,S7k-395!  >,@U.e=T] {Jc6Mgpf޵6rcٿ"4.6eM^l"$t03ִZr$݋{YzlUIRuX$ϽOfެ ~tR0 %_GsrCQ(H\K).cg3wueR~lm6)yx1&}VOx!2affyݎ?yχ0ۥ2Yp=kP^^=en[,%eXteo}{<,^FWlt,! 
SvgwX$ wkLqXoIZuMY\swF;[|?jʾ>ӯ y{ G2<}̛[wA3~/?ߺ$:#f~v!OID6oQιk|mCT^FW0(Zh;O\kޑĝ~O F+ʅ LXGU|Đ7jCc@cS @&/wfě޸w tf7uCa立c!UәǗʥ9g7NE@F^1' E 1XcӐq&pA7֫ZG*3΂@GSLL$K3Z3&Uk,]fbfJn+:2yfYA`P9 BTX֌Dt3% &'thIX7(O& 5f紕JH8'CI*Ovb4i|q|h-ݤ0`!4L߷(9Mx<0 K8%&R,zv@x2,1餭7:_ -VGL|l-DT(e0ƅZHKSd++J!O tVhb1{60ceJ{%ftBێHL@xmjfgl7֮wڴN^ױv`x筵5Ј=5Ir,DP8ci'wczlJ[-dTTy5!Xdr^.$ ({M684dbl #?NeDV3#;FyThKe*X| (lK3"@"ssAR XQ+#AQi1 N!)MyЂ3.8oDTQcz aj*|8[3#g?#>Flj8Yi^r*/yw8 NG8*Bp[EFb \GZ8,4x9x)|X[8u|xbg7bg=@+~P4U3`ݞtmIGhQ7=qF)KG 2`E[CWmX*o(#/ԁMtu` pa:.5J ]Rfѕ:TGW6=UZh"BUE[ *"NW%c]Bby ʷRqUr~;w]8p+t's_+y# $|[6;Ru%9F1֔s"΀i K#\qĎCY:cW\ Ix*5tֈʌo2|WHWBk Eth]eBW6M:k+ 3?JޛapqKYKgCo4rY-"mͳ~hJ>#*ݓAe#?s>2)'kXeқ|LxWɻkrbKᔞ~F BX `I,%*i9 Cל4p2E(TOpAh`#Cis-65rk5/f#זe'-SUv-|F3h=d LΔ㙎prXfu4gLJ֩S* 4oFe~x4=ovr?Ynnz)Oǟ/iL\}~`oxנ#8Y +j'Sz1O)y'|n@&Xn o&8<'_v9,~~?2٢kҞ X[fxD 32hn363"ʀu{n{*h%b^#] E#ϥ+y{2\BҦUF):zt ls[CWn{6d42J_]^_0D^< KMWpZ} :ǡlZp>tGW6=#Th]!g.bi ]etQ5'd#*gJS" כ)`X@V]4tFui:WHӂEt9S W@[ šNW%^!]Iʴi  Wgњ+m$vз Gi߯xXhQ s*+3}.TB !(A:^=ۋV^2;%4$Rp]!,lH#6τg6>wnF%EVNBZìrh$g,"\y,ϒV_*8`_$Ԍ[DW pUk,XDݡtQV]2LV-g"`Ik*ýB3`WHW4 Qڳ>D_dNWˡ+s`ӛ+¹˧8 Nq\yauuZu!: lk]#ttujSZi]!`}S-ttB^!]1&jZPRz#J۶ QoA;Ŏ̈0 WtFkHi:+iFx KU^Ђh:]!ƝiՋU>I,(i ]eL292ʦśut"t%%||vXBUFM*THWڈҘ#́Nr "m:iIetHЕݦwT?5Ŧû/N8㯦r.-{y]ȫ~Sx7[m7~(Q0Yɫ %w5cl>[yW 8 ey~d_~As n) ;oC\?s~/kBOʲGHkPZ~x[w0u?,`nzmHoD2# /p1?~Ի=gOeaځ~ˣ+XzNn-&Rg-7RO DB-ME-@PɨQ"θe}J0 7`W!^ kc;hE.ߦ`qI͏pp=aoeIȲ? 
_rɂݿ8[mY2|7= ͟ oh+xw>z/nʩx~m28sq?}Fi>JT1K[We~ k6W^aYWMyM~fqLLuN)g(-;VZkrjaOW؊I ]L!2tbE=]O !zɬnHՕ2LʁR hD %!k4VI%igKr]Xr+1@C6La@!h͙Bre67J,YAg[U΅LM58EH$1TƠu#8ƘPhj 8.תR=IPOeNyʢM P(I!Fc_#a`њUjmpԅ: S`SatVܠIE(`#΃lVQ['-珖Eb9#]+rb^f$KH$(޵>q#u7ЮfRKRy])4Tl'5DQ|%ʞ8!ibng0֍C`ԎL%}*&RYzjJBdyRdb,E,+9-tr>wdgYM$6@3 kZ@iUV%oؤp(]5!T=dɱt;: e8 :a3e0E愳 "Eu#t^T-W-c4%dL}7b /hۉ&D@+dL2Ia=I$\j$\;Sg5B: ܁O$Xz=?)xHK.HRO߮E.a{ou3޲v ܈A5?f6t_`*(O^0pbؠGѰL!VAE{mh)&`AK@dIT] AȒKȳ!E5Rj$YmYqU&`#_ae2׎F~պ:WwjWś)qxOgqmLi]$2:`6oy"補P:9dGÞCj}vDiP's)ʃQ.(2XN:UNs_Ν,uj;7U9n\f,!5}vOnT >2ۜhS*0F \0Nh>cZ`"B o$ϜϜN0@mz3ffq7΄y"ϩc Lٔi}6 N .Lh +}R4O)zf_h5W̋35tބIT)/D-dQڨ+}tr(w.d'Im>AR8}BeBNhu "Jm$Gɸ$3(!3k1ƴ:y`-E"f }m54Y(a؟H % #mfY+oN[IE቗7V]b+`G ëOWRU}F?Vld7^g(MLĆ[T"g;z"Ъs{V7up͏~ tf=&yO3hU9p SJp `{gѡ0RΧ{Cz..Kqwy2rnaY?KrL!g Tr]*rLLNBQ砘R!F8F()N1>նeDVK5xŠSʹľ>q5B)97ʰbe pAn"Y |a,Gp5~L AtH)p/ne˗_y7 oӗ~z:fb4|[_._ݼ>9z;l{$^EKȃR xlr!Ӌ.k E"Hw_SҼˋef73U{~_}C~JA 0Z~$ƒrw%w 7+KεBq8)dt3^$`:廯dNÛ``9_ߓz[~0~?o`8Fa4}>i2_#L?NمB_p +m4^5VH__ / %mwjq0SRW$PF gJXV`qi߬jXKtwַej|LL 6FM@#m7w +D\){co?4CzqMǛJ&YmqzWd7$֛ۂ)AZmmзL;7޴m'm0ʹ'شSYe C͇ۨl␐/FeMh5QCgٗAV]qbMߓ[cSW; cy5xE 㹹d{v6}%'``1$apl-Fg 8{&10]XMܴ8-S|0u & 5|1X_b;g_ؽu~{}RPERKIЈdܢq͙ ȃڸzN1NO/ƩA_`@w],58i-4`L9!z+y!Qh[H0ʗF-ѝ;R 9>x>Is5,;+?_?OH7?m!dIFIEK.BR(^ڋr<__/gQͿ4r7{[V:h^X+G,*FlZlk-N}SgT.u߽p̭=0ZsKniuzE.35"w6f-5c_̚Ddߐ/Vo>b]ܕ|tsÎýk:}0 *#hcyyRx.&3sݱ܉7V}YsCwO~hCƮ~?gbWU$U>_;R%hahv ݇!6fvlpr@x]}d/4Jrph`*G0sS aBsQ1u?:C&n}"e@d@1 \N[+Fz,/q' |[,Ȼ|uͤveDԉc8|4~кMWomRlXFhxIzd*5SA9"B;8;ЎhoPڐ#YM2W^ZfD1dJ!k8Z4\$ٺЎ}EeS=+cKZ]!80aJ(4>qsQƝ"vJAImBXƒtw)HKCtu{gQiO#TO IP]ʓ<n\+II֌ٯajg o^BtɫFn 2ޒ6ЦiXl^7K}dFF'KS4sBHe2%Bbe%TY@Е[HqiҜ`QHQԦ46UXh3vA C,XWYcW#g0碵qDZZڷuZC[8CIͥx"A]֒ȼ[ Iy DtZEe `ulNsE2!CȊ2Y51`D.(1f(}&\5ꗕ2b<X?ՈFF4z`=w&0 $\2c0h6 $1j2D_W8UdА\FD.z#ItCJo EeXUG˜suVce;3[m2( 0F +:=QGMga0I,)jq>N*&?cg\aݮ .5W~\>R -KG/|7NR9( e\V9r$r5B. 
h ѐϟmIX.@p7n37b?\l&Bd#ɓ-_)ѲcŢ$;N,OWutiEC*qjjlKs/OG\s޺&zXL?u-0z~qTVylEzCUlDejwguuWAK3LV[%y,i&@%ټumҀ<?kny'[Ul ܪG ǟѡz+){Vha(7!2E8 V`@<ש[pZe?k}y={ҍZ+PMx`*`A 7CW۬mpKn_n9, T}mγ_ԋk|5Xj02xyNzθ~4˴Mr\Bx]is)/#X\;`̭3&ȼLՠ9eEzE,WtHk4Ah\xkRU$:p`VƗ 34B)+,uSp/jm;ǍQhű8R2 8vl˱<|qb 95y(^o] ~\_%bv=tQWR?@a+wqcdViTz򥴎Q\"TE&^8 y`R o:|,kЪ*= hBkPDe` jJ*hp0bB=\0 p]c-P 44{`x&&+{"%)2u~U9/$u(aX XRiIy0|\ #j&_ IS䲤(*o1ީUQV2=LN Pa0莡TN0%w*%>) V.  w~`&`␄[3њGY]\9 zLp\ Nڧg[`^,OX7pgHgydܴ>N]fĜ 1=:5hV\'exTôc2-1 ܐ! ]{8܏b.b x9 6'@`yBB>mp[~6 0]"|JջmPfЧzwc]D;a{ms3lyyuk3璁f%&E$ N mkX'>`$F¾1e;knǭ5ޢꃗ`$+`.i5м[noH6M^F/Aڗ; m-5.$-]nkmkFp#6),o`Җa: ǭͦis|9^nTnmVU&l7z`ϳǯ^lVeHgtDYv/S/U ol(=1HOٰ=Ch[jM(̆)ix0i뫘UK;_P#EO+k@LXJ #40KQ Q?LZ סjW;IBz,mَp;k3\OsR{xr8 n)1JRJc(r J#!#qo3J6c,4m=?%he%ZW ɋRȢzU\$=n.%X{`xn ,0KJTJAm̝VY;:`CGdt)v0PGJE ~:퓾LPpٗN&×ǠbbjJfB0ވd5QCǞQj5fގLOWKzNpFj^ ZE6Rv+;վCO%'DW]u+I*thƊ5ڰ`7T.+QW 6|ۈf璟?}<ϖͭ%MSSQu.ȹ#ˬZ{6714VL%CW?6Q*v0>AFɤ +n4%<W"0Е0]!%4BLNWRNDެTIV{pӲўo{hMXƱ$6ʱ{?-chk49] ރEGivHg4wf5/JE_MzEVsrBuޝ C}B ~[:gu otJ(3^(mEry gUn]YX3!#kafmd[jI}Q ,R4#'ehɂTB,IF%DڤZEl*!dP OP%TVk+lML|Z;]JK@W'HWPR+9DƦBWV QN,Vt>` ,L˓+@ppBbpfJ݆^!޽/:=gv+=숶'ގ(u\NWͽ]3iBt9!¥6BNWRNRA^J.$vBtutŔ&ֈ)sQSL|&)[":S%HLThr;M#J>AX'DWX&d#\̞冩>Еd\+,GT ]"]ic5y @p_'? ҍCA' P^ D.\8VU"gUNIg" #( ^b6 뎃4?ʛAɍ,6_~TSѱANcZUF:*\xbB6dU`YY:"*(#;Oi)  YT K;]!J=XHWFy7]!\%{(@W'HWFئtt 7hWV Qj97CWtǡ# kW'p :]us*hMOQP2@Wt}*)K"¥<BNW2󁮞Zx pu.؈s͌rkAn+ßӡ, .'4넡D~R%+zaT J;]!  tDt 8aF@~rZg^}=x(' >Rբ)i8 بdTB+KE%D,z Qt2OjEIi KN+۷I7*S\Di@W'HWF clBtIdNq Q*GWgCW'#hu 4CWlǡg#E  spuʞuvD"3Yb];TnUBtE`'T  1K6vcrȌs+ʶva2B+Iil*4hjin:!B$CW7#;]JN@W'HW͡,B2=;Di:EHj vtݮ{6 1f$d2wSANS A|-c%! :}Z UD?L N,@YIhgR GHWښx(]`՚EWWT ЊuC@W'HWVS1u夺8!Zv(ꛡ+!B9}kW kJYށ@W=%ZGPN`Cӡ+D*thyt(QbI4BBWRBEt(c;i芣qQ 31] [v7BGP`CL24py2F0A7U<hIhZPM#x 0뽦r7}hV QʁN$ӄ pIkWR FIҕ6j>-@ 1!C`MZ: #0ȑhٝc Q ѵ#7  J(7dvP OG%0mJǸ6BRBWV Q= ҕaZє&l!t ՁN,Lt &MI 1(ЕqňPai{W]K6;Z]uBIId!]vWw^pnid{r}UvR'ky}O&c`XrеP-Սg|J(_2j.Ag];kP_G{kBO˻6Ig=0Lf-]1 7WB._tn&b97jŭRBڻ/뮵p&RurWΏvhpf16J; 70_L+ms_Nݞ[Mf`,ZΚdwH7}|E?j*k/&# ʯ۫e۞o5$ߑE~䲷y]0 Lr:[.eoFUoz%7URvִY9K%H7t=c+! 
حj\U)_[ fjfXg"Ѧd&r\\\!ri j+0CUf뀬-PRJk]$R"sȥOAha1c1^#1>!6T3F+PtRU=QKʇBn5D> RtޜyUUT/i>s<:JIͺaհ)=Hn$]QSQwF$֤@ts=i:wK9jkeߠg|1!<6t&$$XKQ,KBJcuߺq)hK -F`vѺXm6j!G]C2j(UÆVX9[0f %xd͂廋 FV+= JAuT5ZZGuiE餍P4BZ%;6:KPP[r=JNwh A^AkaFhvylX4 "(e*9<7%,d0 ǖX[Ck+.#ȌxVj;xX]AH` (JG5lm3TȀkt T{zèl#ZuA)% U#] 2U+wIQ22)ِM(pm x&$X(FJ(]鬇-PUlA1n2,Xm#d*zSJi_t#fH57]P?;14m޺ Dck!$ ( 2*ڕ"M44Gw6yJc6(8Yh>3(RAUvXNZ'$2cȝ &v/'֜3O~oUUg L0&`J#=Cw / Tl+MT$];-6*d`fDbQ$/n6~˹;g, Bx@^,4zV6 `PYPf1k昴59%kh bLgD~7?.2#lRӑpлaQDHZNF ,`]r5wn+*TZ`]А&LMUځ DGCPCzuz ;fc_XA|;l`u|.|uN M蓻~#P䍡"UA  _0ۢ#ɡf$Uca<RuI` W!Xڨbj#cR=ڂ ڊ[+5c-Z Rת%_63T;:\Aj},ޣ]ےu~=801>@|5@RMw*Y;tڠ@Dxi֚&-mh-\kԋ+Bj@SQ A蒑8I/{D66ЉCX1:B0d@bx|A4T\/R- o˱XTZI'#) -]( pf@Bׄvu3R_]wԢ`|Jُ,0H5zAc,nlv㸄<>fJ4Su41~8rūWwmD)8&˕dttc_no^V|ARح6^]Bi|~'f0Y`B?旹yտo;Km‰} _k +\>j˯;}l{;݇G7alwyw c܏u[0H<ÍU })@ o?&'pc q_dAΞSM0BTÄԃrΞ*¿իr;<Jq6:pJ#>`,#i 9)M4P;6(4|h:"bI ]VXQZ#t BC ]1\뎅Oeuʅ܇ڝ^6[ټ jN>*~V9RnE]2k=]^X/_ tvsozt)B4*4}~?|#;~땩ҷwTپ_W&3ET8:5p5cG5GEWu1\}4{]>s? ]{N66ٻm-."ߏ.# MKH*c14ekfly,[!%ǕO?..p (-rI{71mxzHWNX)RnZ) h>dg%k3(+'y9yV UNVZKV6[P(IOX/T*^4 @? g}7io0g;Oj:B A @ND ix R @y6Z{%m8 7_[ c}8")i?s}u25]}f߿҆ TuCW;R-M5Tp7k牭qpyNvV4k݆evfe珪%`.,Ux}KRjt 0NP•锵MޮJ{Jd/"KS$D$qVpuMADxӝYcWKSE . SD(q46ٿS׫;Lyȓ9 eoHwUI!jP ~1'k @be1e3wdL{W?pz#Lʼnse'N0/V(J#K6G`_, +i ]7Z!NWT-]#]Q0񉫸@ĒK7ZkGJ0獡Wt@IiP $Z>CfF90GWЦU@p*Դ3+Nju3 psU@+U*Ԩs+)eW{^Brjמ[C^oBaY~ .,vj|d!$[(%do.Dd`J.x-u7 JޮI()RR7$͡+Pc"̽<ӕ((zEP~R#w;]+O|j)(JR3%Jt1.Xrp1o ]֝JZ:C"ӚkV.92! ǜ[Jiqh:ֺ14 p)M逖t@)pKgHc$'8V1tp BMU@IUKWgHW u vvRpn ]ZMp*h 9m]`! 1 HԝJ:gIWR\6qw2 SЎӰYrSu8N"gxɎKŃ%-4Ulc-:f)v7aR.2 xĝ(C4b$& H܌FO;"lkRp->yB\)㦘-Z@@9R(D*V1tdZLju{ooKWBWJ=PUKCW-uRHWZy KޜW6Z]{ Ӗ ]ɂ]ϴ(XS].;rhvGCY2d -]+z1At%)t2Rw (h PJsaªϗ(8L2EXܿݚ+$Zt+uSh:=\@4}4MD40kUR hEʀRіΐ8"Ga p)j ]םJΒʿ/hi\ELIOr{kSқ2BքVMyAKq.HId`JEcL2.d&cY s pOFrh;]JpKWgHW sx4UEW.!M޺ (Eyt | +LQs9˙=]]W]D8}ԳSU9gӖN*9PҚѕ*AW}E֘7B5\BW$un;Zz"` h ]ޏ^u+@Ij<*rUS$B%'pO+&xtiTvS*1 iLn M4ZN-M!M3 +Orpl ]>Opb #Z:CSDDWne1trTw (sŖΈw=&5kAuQФ >[hRԈlv@7/OT2A)F. 
3z8F^8hA}7B,1#3WS=;PS7ezes:$4JTL" 88eucr_W5- ?ҋAc4yz}0g?K j=\x旿CdKG0~&dކ{9MwG@@vWMu$TD7+f?_97iE>?q5ƽhxyqH,Dal|̄MI+`(Rj(cJE"&IZ],܆e>69s1uQ"\.. 0!uB0% *v$IM &ZÈ7z)cɵp rϗ9{ Lnll]NaX,.M7e((e''P^ ?_oٵ%<;?̮Åfz&^6NӡLךnߋuUA EB_/&p2}zYk";.ꍶܳ<Þrñ!II2!DbX'`RXP%L{JlLM(|lj2T 8stKuSn׽=ޘt>8:>݉=`ּ% ǢD]Z+:tY -T~(~}lx↞X! = ]v5~>j8I[Rnz]^󡖬k`Jj:M`b"ǗZ2z3MBE$_ۍ|e3&g7Mвz망KN҅{L3q@*"-PpDŽrRrЛLǷlIU`i`p0c,juDWذ8lv XE_C|*I: f\HNBrR7{aY7׬0Y8i<0usp-X$hIP$ |^JEV*nT=V[,&;c .Lfy'hRXₗk-gikz=H 7뭭;?ovтc ']cvn*6 "('hawj 00&&kKEem؅h5`Rګ'W9fI!W#9"6GrDP$ *zx3ŕߟHRK##1cD#,$Ic5y}aeOt>W el͟PpI78v,:ǤO.l)vRR DpC```,%aXR11TC:1=3͂Ho&`۫ g%WffDkxF n6gpw  {jl̍ 8NHqb-a\%eJ*cS$S +e.~}x3Bz94Hu[W†bIn4UѻM' ;}Wž΋_ Iv$t6,MAv|Yz-΅lu' @/a.bFě;;8̰;wzy=Nv_7?oWY@|.˻؆)*y0f{2;'qdH8|~ʥQeyFv0>. VC+LGYʴϧZ8Y} i7S1*qCV_8fwiFqiƠ^5|wK*ܬ4oyjQ9DG'))(=~nZfa˴ڦg"42t_m`$&œBf95Ӭ͇nHfrp8JsvxvDKg:8`[`kwfyhcrdLN 3r>x =c{Pl0Q2ki6F͂ G zQD7AR2}GeṖU`@nyP|X<0ة60 MUbXӾEe3}خxʹg3Rey8+ qce, GcĬEaK8*a.U)I_"r*Z Y ǐ%kEԚ -'8kAJSq }!jbC ՂQ-IzaCT*{G4Wqd:ͰB\US(S(9J(E@G8ȹ.r`{%| {+lヒSso[Z|?~oxc3*:`k(됋{uin~՛|qN+ NDPhn1yTQM,'3j5U13IÇsG1;|tjΏbж.i\ȿ"w yIQ٬l 3t4e_ hiAmRp\+o)p)BQǘX;<$eFXp4)ol͹ٻ6$UX܎`d&,ج!E";w|jYt2MY3隞zI^<ÕNr7zb3d1ߣi[G߽\wϣњ r}8&&}*SXϣ԰ruNP|^Jc5u_w;X=ԓ@`ɠ޲Fw\I5W54J14a?=#+kJ1QJpcr2!9`tɨ\}T瑩1{eIK!b&>@PcS6Tea}ڔZXk"DFH/ꁩ(VƱGGτt) WD!WϥDVCo+QԺo+;l+aZie,.te2Y.6mh=Y! )\)"\}IOV#Bc`J1Bf^"g9`&\f12:9Z+=hngԆ#%c' ~"8~;xs:i')vښ۫x: U 1}v+5YkѫwOM(e@PQAɑctЌV؇Ћ|?:9|0|Zi|7A_,xj:%3WAqZ(ݡ}كZ_C?w_ ^37,hBɂႊOG  N|rTJ\/.0 v-u6}J5.v3|LD?iDJ~-pfyO"?!tT f;pnW}=b5+¿vJCFҺBzҗ(MAb.EMwuŒU1X;ln{wFٔŝWxg٧I!og[93o*p=P#v1Fh,4']k=iSALt68&Q R$<{cC[qL>GRIR=FI7"^׽qk>4Ca|LBR٫>%OWJNpш9ùOlBsBH>9@Π#)@#r@2D=V$T92_Y]Y9.{^Aޜl7y+ޯ=,qs8|ކic{[tޡ#ٰ\lȍLH&2;OPCv(xEd;c/Z;6D^v+/- skc2d%EK񘵉<`@Fl!>S;6/Qهk@i7i:YU#nt<*E$+lA2g#WG wK@ X"@+%h z0BmSvA gI 7sϤjkjl׌J5]X3҅Ztu7bZf̮ 2^S%;cjz&O~wAχo\c+aѥѨ5:iBHe2%b0.&W,JŘ:[K$ AHQԦ46UXpf,CLYPYcW#gQZq1EkW}6v`p|{E 0GWE?$Soj R3-1i)?8#P2Lp%G5Rrth:#xq箙[+*}d DzB,pXiކtpf|mzFoH{E^.~[rrr.|8jSxFG]32$QVFdZ9k-rؗƯ^tHсb_#ÀՉt! 
_z;\v?-U a}\'C_ud~vbŋZt;hy>^}͠krm 隻NwMޣqc}Ad$8 _{Wo8:BM[QyY+2#FLNTKhxߨ%5w:}̃oӗ\Hw-CR:dg]4432tu,J G6[W:2A9;Y`韰2Ee9}P- iY\ M10彽{(:#6{֭k=gI&U&oxxfmV{Y' HTVpHF )e68ppL8}ʚh]0,b:̀kL6,!Y% GJuK-F,,IDI"R9g9!orCt %b @'ka1|=%ʝձ>MqKe8RDג yΕEV Ju,12+avMzscnUTg-̸N.~5ġSև+_4ܟK"wnux!"urJ=.!9^]Ԃo> F:Cdց[K=ND,zULI{7^6v{R.$=,bUiQ46ru]?ی}7wsk䛇:;q]eɝNwmxM! %r65dj4:]Od*m)Ձ(JGTsk;dat_E//k9@Z*/%tֱ$´j8ƹ35] |5 jRrBsJ" D 0B8H2My|j#\ ƭY*}%s8y8spr}ٸl#%8Ӄdh>VLh$RQV&2x'`Xqe)M"%!B.dlH%#<B p!ŧQ^u0=`ص{oZl[u (%A3`=U)3ZǾ=4 7G3 !g+G_p5؆1If@"Z^:lk-Z۰qqdD2t:RiSDi0Z -yHڼTĕAH!Q0CMYC3%--1xΒy;:q4x\^qcpfjo#T|σZ(t_?|1Ee=-t Z4z$Tj_ڀk`\amL$'eΌcjYm8baԭ2x;,7fm1r7v\2? /=ۏ—CE6NrP4FR0J7"@U]z#L;=jzDN(=c8$jIRXHHBD=pEJM(Y'Ps<C~DL &aeF1RB HL 68n9y4[·9y!/JvoZ:}ә'#hI LžLN^qAT.8Nh.Ef5k1kMΏIg:uk=ixCo^]c޵Y'odkv5D jC ̖6&`;6`;&`;@+lJ`P]*VmUlxc"2Z]]eZu B.Y\w]+g&eew%6Y1i N;!,H!Qh0>5>PXR"$FU~zz+C5iFLƨ Iw/5>FVM?C55caC UFiZVՕЂ& dcUk) R]]e=Gu3hʀUs2\m-z*-z mu=*'~pOoŸ,p 5]p~=Rʃa=WWQpKzE.c6FiP* '*s1Y\~f 3iNDt~ՋNKba#{սTWc\DϞKt/NOsJ9I ´Iɂ ހg:gg! [%d++w&Gw. Mɀn pn hK?E>C"A'8t>uEB2\՘оaUWP]hduBTcUKڗ_(o3TWpMDUvr,f1cѭ1}+J:lshm5`<2v{j{Gn Նjlv Mէ0=j{ܖ=CQksTWve:jfDeVڮޜHAj:61j:LyStF G9QVM?C5-j&QWWQu q*T-|JR&iUî2\0MQWVѣe9+ r| W=E0D`QuƏNHD[$ƃ^WR9"y!M>|P﷨3t)rI!&}[+b#\^ RiX%KJ-Ccp2E2RJ)tLL2s:'LϽ˪TTЎƹ{fP'9!x1`:>[w>l/IZU G NsQ茝bqTV0*,X7(ȿ W'tgҋ;V1~񯥾EI Y\BŻ=@Mw@I ?֖=n[Nbd@Nvʒ38g;IFN^R A2 Z41P 1  )+P v 9'EEuL3f<>e/yqCJ)u 4g*ҘȌc72J[_Q؝'qg9}M(T{qʤO`v7x٘ cƴ#GDdc `5djM&D. \ȴ,j91F)`ޠ)bA)j-C\ZSspY8G-"CJrA!AԜ[68&ZhӣI)zjiH)[~jW'l.Cw@E䠛5Ϸ~d^-wS;_㤳x&X,XFqp%9EbT.RIFy`T/:^X$-7mCH[W3P fH^h`Zx4PaE#7)EUAŁ#@ pvx })[rw`e e$1p^%ˬ.좨s$ 8vK1$iI"k ǭ?Դ@".hfBHiU`DH$$L2P J#)rav ;*GWݾ}h)#vE*gD~-O4Qx%FG.NDH8 fFrr=gϕC泬:ŏqR>ZK ,w%??HZ^zJ&' d/Y.SMqi{C,fSy:/Zn/BxWLO0tQamȉrNVSOx\x 4L97 D+Pp9j=!HrR{! C&+3[_b/dzzh_60[aC\ mD'n{1kJ u:gK2O9),r&ɩ8O\G4jw4{>S៕;?U7^^O.gΗW`$G+/\O~u_Nۖ۶]rAQۋu%M7,I%]++FpkY78hC*-[1~Ƿevbje|y%ו f U㜦3H.Xe0VDZ"3zJ4|RN7^P&~۟.|wx'TG_:?7_%8DZzo c{ݢhUWX޼h uzYU^SʘŔ[ ^zc~>\ {q4'L5"f•u?aj_^Q=T-**]#ASE^U_PƃTA0 l_|>&H?s)H]RTi5'/RHk)wFk KxqEIBMhmv6U^Y/pxhMw\ӡ ? 
j fA= €8 xԨDİ]S˚8N{oJ~3&w dc/3\71 Ffrom"ZZ#ӻJV#TтhlSG4Х*)}Xdu`𡨪t:bhMϧqwƇ_)zu'|J?1Z#%Io6M7wOVCXYg4R?5jڟTbU\RM>, L="2g3E ݨ)'hN,R, ~o:@/}lxYh,FpfHDb$Z3υL L ,!2d6mx}`zyt ($uJ13Zhch`4Ok PoVCx帷R` i {'Kj v!9XL@)"T2YE5ń#O:rF9*TbPTe~6}}Dj/, V"Ns*A)2"Z9ҿZߍNJ4,(`ɱ8jcs}'[!isr)i1㳪CtW'kpUZ#E)%(kKmE%ȴ 8y݋W8:[Mꛎ =;Z8[p?Z7dfDDGOpG,q?DM( pjq KF 5MIYj%u\JО t?UI$%HI9wemI ifM}ew<:;vXj:DX oV ٔZ`wu!*ˬ/Lf0kL4t[dtGQFW[lTUR'ȑi:9_cBOc{GEdR?oi L/˧ˏ[-6[aa' kn|"qٟ<)! `}b9_I?]Eo>^^mHWn/]yJ-_qP 8H8Lat'pB*A]i23k6&p+{h܊Kt_i :Sn.Ұ7veY!u^Mւto÷ëV}SHayyAOy;8.W{;X<*Y ;v; mTںvHiJ:~2R7ލ? +njTYP΄O؁[KAgG r,PXm*-)Qx}kT(k2fQ_*KbT6ey,rs JKa4Ȍ rR RdPBӾ"8j v>܅5|#ЉT0Ղ&V9ϸ7c{&GlOYn%L3kO iu|M@gVn62yMs~Msy/7)l ]1o6ξy*9Wy=vE\o W9I{򢡟U^\ʿR7U<| `*TF*&\2%{+%0'(7:Iy#2%x%xrފȫ[i;괎!.+#3NL.ƟL* HfRz(#i(dNfc,4Lōd.29}oc2F ۓ'y˷y- 4NJ8ʄ"JmIIqYIfPpeCev1M:Mܼ`-$P [ x>G_F5 r(M GȑS)g1')JydL@ףT>nlL]M8ob ˛؀;4cZqr|ӟLfeT!qMEɊ*<X2J28(CX ,r0`XZ3LHP(ALxx d$aMY> Aȝ N+>p<3RG(`kR427 瀝^*O/GMYKwƫ e',nr|7c7F*+׼1^}"j ⫯M# t603+ne2Rm@@n9hm=\Ml#?A:" <p& {n˔xO8oP)E=ɦp m b?b~ {h?^ޟ5x.q1w6;}J2*b|ANP砘R!F8F(cy=4S bx<Az% TZN,5,< rHyYi G ƮG#?MH5C7+}7!$.zS[;nLHDg܊$/׃4y8}-ɺ80*-^zxvaKʱ_]-NۺX?XϿϿ}{F~_M]eZ?[eMP}'t?vٙLq^xJzx$_X__xZX'C)C=",Vg(jjҾ^?$԰8Kx oirML^ & 7PA掭.1t7%=ث[`E1CpIu/bjroz>m>mAAߗZmeзLbofQ/;f]oqmbL>,["gmktf^k]!0VwN͆=}9ݝ.h7jy E -hjeMyE/OfM.=;haϿcY˒_hq̟b)7Yn3m/N =dﻠ ]KhDP<2nрf\sB`, 6ZLkƭ>iSo0lP;5@ Wx͙NZ4f9 80eQsu鍂Re=SzG*2C~S;q$wrBZ9j^W9eZN^3J<.o򦽱ƞW51}Uԕ .++8i|K˔J}d۪(z#e9XYR`W Axd$C,׈ d4-dؔ %p̡>Kr1#,.sXadۘ8 Wu6)|d2l$,N:j'Oi^v$ߟ?X >juu2͖7ӢS'5d|>U|4tyY zo\^ތ!ɰH *9U.zL#Wz+A/ɾ^|d+3x5I{2/ ug콰( S Xლ@C{jһTg NZ~}Qm'/nV$5=gkHxiyb9S~i宧+_;=6u\yOSGv:#B6Z$ٿ)93' rO ؞>7 -nUց]/Mtܚ;EEfDgKX;wu5ka[[ýv]^w]:s~6Uؖ0ip77C>RLR*ipYLv눶Js!j@KQ(@g Fp))S8cI@ ΰ=5; nzB0> !5lD7xxqVr.& u>1o8@KYh)4WAhR\gt$ϹHu 1|1 \NB#&U oY_<.{bAޝ{,Դî,',2F?{Ʊ so-'7= $0*sMIYv[=EZ'ɹf믺GI`ϻ.j~EJc~O9sqnԮ̟ϴA|w59, 6 }n3ؐ翞bK"v-D$K/ 9zfD2fс>1ɒ9RCGHIh3HGv{ʮZ ]O6}))+qLr.cY&2"Lf#& FpMil.^˧<$HPH(=DLlĹPm4ՅĨjzkbv$1z\GUoEo l x7JVj˂N< 2g#Wi()HC4upi$(f ` IŘѥhrVQs-\&ĹqjXXM3vU~~=,=>xU`0 [FbN?Ob??q|4}1b+\C)D!h3BH 
ȗ`&RZg ^d/傁6!6Z1XTY RX+mjr!ǤA!U mUͦ؟$\KϠmG C 6,ܕ[̨e9YQ_w[kyE;hv3X3JkV{`LM5j`1B;Hcް|3{ؖh2\5:p7sh;P䤻^]J[$iZ4}Zͅ1[z:X֭^Qוϝ&Ce4Z,ŋ1+`4!a=]-<] ,bf'FVXIc%e+J`*SSv ƌKoAObDKY;J֋q[ǯ1BbbDP2_@ Zc,&90fK `,Nirv•(xJ_(z6&c n+DG/mPVOh6ÚnWYE^>Y.ͱ,n[.4{|B`H|G{UE߂l4Β\"Մ1-KӷOgVX8o^eIgy+3_|??v)w-#;ɨ^{ro3N ,k)ЁG31mtVlwIkԢVakTR0LA7֪wTE!>l%r=;6*& o~k9x&Q iL o[d2f:&dFR&eSozgʝ__Kg$>n扷NtoP+({}7!愍=F”~cEu6:)\$eRw)YPElp{0e7݃7٩T_1R6Fqrlë_~eTP|vuKM.Q1oϣ3i%fUb=(qS"t;_$ٕ"^=yӼEOL !ͳc4߃ y)Dx.8eҍEls6L_&?^ެcޯ>)ϩ-0n=f,^N]y[ɠSќ&/-Hũwo^L8zVy`}@b BF B?F"TtӊB_LĹ>(_k=ҭgjr~j=Ni-!\d\vG'0?qu7`p(*6Sm5Ub~2džڐ>%$}e:%3}y)mv^'5}<<|+-Iu"X~wIƀS/Fx}'Zc60iY *cQ$,%[cD `6 )lGep9V O-H$}w%,<\C2ǚ"jhf, К>|bCaOAY6wqP>>Oc=bw5X5|$ yFmBS[&EK QTоB`Og6BRgcsW+`#Gf/Qq3D)]l63A)AhESVzֵ x2w{sA!I7wtcZn\hmJţD[RĄ[@YA^  $bv?/.FMxe?]ŽwErݒ9E_}z;<|(^ǐM34A0 E+ ި䗧FeՐ-D Cuk(Jo} ΏhTKQjj70.u5d4b}pktf EW̑wܚ> }gd~d064x H0cH>՘&yD-0%Gq̉(AԘ=\#58bBv]vJ*O6ԭ3EG'W%GʉmXP [i{zs}+tҢOpmݢu'AK P^W!jSTZ~O  EIl)Jü~}ҷti8'AZ ߵޜubdynyt{TXb6>RF޹F=>A&jqj;v[#8ŚTd?b@/VI0q+.ےJ]jwLMO~JQ$xylON";r,Y}LK$@.1 =v'ߙy16cX~/Gj8jsx-1ZఖϬoJon Φ^GB_ uځi8)֖:j>W J^¿.yϽWucB}#͕#Rk8Zͥoh챠**[}i_Ўyek{L#8塷+ӽ[>c@Z_%ŴBwڼT^=,~B]7y.U߰V53ŬuS CA%M<[Js/T@vNx+ eLj'0JJJTF ?Ҡ89v\ԃdH۫!FB+&e(K2mfd_4:T H 3L KiзIQ(RC؀J[ƌZ67/@ЫCw BȌZUب M3dqÐMrʔJuO19\XZ [&[.!r8Qe>q`AqeZC"s)ӌSo n _]o/ @)3)09Đ&4 8!SΤƌPZ4Z31.(A4Gyp"SB#笉PٮA 4C(gﶵ[BbHq'L&֢KڧJ,P1SE|R :@FC<$sz%xzبy(FnqQz\. 
DHIJ  rN%Ygw2koܪ!EE;e]H̱w*`՛*%vE9G,)Xs/;eI e_m>jP%,Y±$ hays*s$%A Ҕ[STȴ!ic ,uV8|13'ʼd @s3BI{-Nyi*O)Eɂh0kiS e.qau_/s> 0"[C_7'ws1fɄdM8=>n<ʦr}lJ\0#{~ztsGmR6btqu{5F1E&i Bݭ-*U:3!8|ut_uG_]XO}N3P8Aޑߪx:h~/~0-x'%&N_ۙ8AΚ_P1 kͳbD{X<+t~Chr%ϫ<%<޸x,}NP\+Qm.SF: S6|?U`a\d!/Xx<%#'͂E ݦuїZp|Qt_fl=YTJzqMasy*ohxN|| -^l!tMfx6xen\76g׋jPt;RT)wS۶!Ew)F&/aiE*]ںNT;b ٦{ìZȁgO?@m{;-ZhEZ0~xYI)poL%/twAyJ6W?.,gGճvu [2`H.=EMYm- cXb1A: %(&$b-z^]3T 5C-P!ڇ-b` MN!u{Լx3ap8UP\tUi.q|O`8x:ɻ~nFwiLK^d|Vs͗(0 >NM]1c,Tǫ2`iukJ Q?Vn9ͩ"JRKV¬,]cO=o\rxi^dV«IBx,2K!+Avl4eCa1*ޑךY vzrV{( YgY*c؎ajOCW^Zrvq7 }^¾.ZX$,GLԠ :` :7Җ|N s Lr?˫Uf𪲭:>z@PB:t+Ơ+CY=wjUx#]vCT:!ۖh,\M(Pd`{gn%]qFAmbN婂R4 d$}܃ k2WWڶ ۣ筫pXFmްӿaT ֱ #!=0RM^L7 +PyA4dQX"zE ;dX~=N3Eg ~25m/c[b[ |z/RxHyN@S0L& ( W4))f|H\\> ߸5j:(ޅyXˇy4דo|:2]^=P.I{Tg`(ytYO.6nQb.7/f=,.Ζ_,׀A4FM,3}O^s(8BWzqU*Ti YDqEj@3]6Tjj7z ~[8`aBAJ#czHUV^(;^(Ά5q]/Ldk* 5-5Pqv1N6!&TN%EQ Yj }oɹ@m({lRz&$<:2\w<:#@=.F̥zlAXBŚ _VbL nMZ}ҜDcY\|* 8k-ĚަA.S RU .X{,3(f!lƚp]ۮ&ِz uoxmlF'9LpFSdh.<㎺FlU{](|KEu*0'%QX2oI25藽`#F>!|zfu lm 0~}UG\hoir->SMy˜_ta5mVL${\fdh9.{9;fZBGOƸ0IwEkr (2NFրr=ܗњDUWUVg8P&m`9ƅI!@J)D24J{m;5 8k"{p鐐 p UIYuRkE=NLԯn27I< RzРf/`4ɫ& I vlDh)-Ij(KIuF Ҁqe@XL& e{46ĉlٲW[e4Ǿ5Wr5w|=a>x?rz5<7N`>1tw@ގ0u0\*f8)5nzb{e2YVn?,2;ݢ0-{q>YJ;&QKT~q~vFɄQ5!5?[u?s,~|4hˈ QlL~Y(_QGZa4C=_/ u(+[;IuH$zu;c4pP8{Ƒ.W8.`/#$OBJ߯zHQ#g8Çc5uwկU3lAZeb?V\4ѢBa9ކuCN e)8ؐ F)x&SIF`g(ykd,xCQ*e=z8tA fg%@*$Gw_>u(KH]_#atAi4PO%/ .e7Gi4S5^/~-At UJpHz7gwygO#Eѵaހ nmtсW絘JK#;  \Sҿߖ}зt `Zwtp=ҁۙĜ9p,1` TU&q#9ALK^ D$eZp s4yZKlmhQ+ht@MC9I^_Y-=*n.W'D@7y4kIg=1E*'ʬل@٧2 EZP2iQ 5 j9$[1ie2GR-%%WHo8Vr8xZ\# $ FqFOGɬل)!@J>VDR۹@K~D"E#Qm˙ [J<%^,%Q1k}l@j1v$'r-ZRV+`:p/WV(l!"Y41 `#ZbiT5.^ d S,T/ŭfy)usXt=!I"MX9<ؗkp^T")mS K7@v**+E0x'h8N>O'XDDe8Zz&mNɞ٫bܦx5+?o@.\݁km3e|6Px֔Uߨ?srS|*4B.3Dbd\63IU>i-7l$d̠חVBt"UQ:J`>b0Z6~a3_Zy+xv\Fk G'qy˳)]GooZ`rrr7=s=#O9E $e\q4;8Gx pm!(x cpa+G֗bO?W'ztz2xĢ"d):? 
u5]Ik 8OK x`h4\ň+hLTBC]r{y鋖r?^ E@ALT:p"=raOѿtMbQB)K#";1b> Z|HirN4,-IT/On3ze'@޲hER> 1c@H2u>;::,h";\4;TY@ ߴUo}hAñRe^n*Cm\ڏǁMH"W2_CCQ)mB]YvVis$sCz9=}R;ʞbۋp=9o"2GAӈ#MwӜpտ7|xkG6ސj탓,8=5L2x6i?5~io']}#0G?'OAY'g񔯌6x&8Ri/;#Fӛ I~Zgk;?ʉpb8"`| ~g? UnBD"O?Xrd3,L RTmEF7ZQb;%2=8[ً8zuiܾ57ח7hnGӿ'FeFF69\d!)5SCF+J7Uѓzri/NfF=+'d#哯K!97T(UÂ!ϾBPgYXy!Et׶<%V~w5q4* k*_haj>0bs'qMIċmċM (bWmo=AiKCNHúR7gڐ4ۋ u?vDn~~-TR\ ˔BO ,E'lxUMUf75@([lZ.θ-WmCcҴu6rAܤݹau M`LF ! -PHk и^SpqS^s6ZS=6ER !)t\Xe=\p)dަm~Wm0Ju[ui5Y| aΕB!lP1r5ͣՙQU@ְT`+@vZYqoZq,DU;u掀YfC燂 ;/ d콰c´2'ݮh~d?M4r#&J4yi0_;4)j$)A+g%Tی\H~ȝҕ8|6z;3`^!~x;9Ňsk蝭^9۔dݶ!0]gJvT=vRgR4زl P1*Z 'c/~ ك9чK+iΏRί/1}sk҆|l#uD>=kuD~d>o]uA>kE~%o_@%lZy4ٳːjQ@CGͤC+de1H+PUFv_<'J4> {Zw'61\p h@ 0 !,jP Z%gƘ!%kN;$(ނQ+vw}a!JYsV3W2f~mklMƻB7.,uv0U2ޗ9A$;[W9;P/s2gI;r9bA(dFƺ$,q=ntBc|H2iҔkKV~@ћ1AJjӛ#7A>8X}qX`khధ]j@ɩzz{,Vxouiu,J[cU3Ձ+>:@x5A!TF`SA)3Nq"D"A:q,,c&}.agg)%wQxyoAz_;+a:Aӽ_wDB$vW6C~J-}l8PY[y匿M^X*l*NqiYXiFTߨWvl\_/b&Yh+*wfoHxZ tc% T5{\m$I8ZrIn'cfvh6&z )9WMJ2-?$E4E议jwJ\({yW+#WG[QWބ1߳.}N-TmArӞ8 qV0X6Vfv%&g6V4gR]p¨0܅#6ԫ{; L/ȠQ+7^4Z>="XkަSB5VoFSwJ\ruwjڍWZTP<<{7=(uHUk676OE~uUK'2euW9/))ݳr덝 П/ٙx4ͭr3ϟq_LR`5 +IJv߃ KRٷDԓK OI {I%J,2)Hq鴗82LuKg 7/⯡q|/Key4<MsGAs9!Måɸ֥suiܚcLhVj Z3+0,1SQո(RivT3ˌ1f .pB5h mבrZ+bbC" z =(UIxyDqQEÀ 7Zlﲻѐmbt;9+!UHrG%upIB>PϬ"N%!`%p &Y[I 6[i֑͖ ]J PMu1c+&v B@[II( }Lh1Ew8qr] Ts8ڽZHaTc{}PSEt ΂j!Z$XX,ILdTj%ijrMy;vf e9UW7 vio[zg=]6-%g8:#>44S&~ ee{ۻm ʦݺ, [ثt6C^v$X|ٱMg(lDeaf̯OV߮]Zv؀ZuJz,%@)M9ϵ ͕4@<V+\WQQq (auGn+H[*`~;BM5 mM(t5@]NCa:y|?|y7,ҥ&PޖǛnߐb%Z^+J8{a-|{$oy>+FvoΊɌJNr3vE8֯{7b iy bX XVi SZǍ:5ؕN䵒z/ta%&WIPaBH*_َI3JlHK =mNG'fJlaY4ɟ\Sf}~>uu 3> R641,SzaaGzmߧ?~^w{az=_ sX%.uShg>EO~p7. 
|+ct#7}݂a6wvpwoOo&//aƧ߂N'0~]?̀H4H~/3[Lo:WveCP[";f4L*"T(͸Jb, G ʹdi2e?`lMRO̥0GVd>ʠ~b} bJT&\Y9<Њb_Nb8KuϼyO1wtbƧ9/]:r 7Cq~7~ʿ@6}!Xօt*Xd:T詜{;*ʅ=0Gť:PBCRZpc- dEҏ0&}-G.}cP!- Ga{'y4PjJD}Lőa?2OEx=Pѣ{́‹[DQA[$Mn1ŨF-[2%qK8b!!Ȁ0zl2Y7%7gB?kqf}E1aw|3q[-J yފI}+3 ^" 4a 5t yGSjwy\I$;"x@p μBJqQaCAGEhc-v؈BDci6{l@{h~aǣU+jwϕY1;aSnďwۇuh%Eўߧ_P.RL;S:OgȬ,ժע3oՌzyo8A6ZG،V(bg- b@:LW_h7<&\Fւ8n& `!a9ȞU 75>ze[7xOFOM^ {?KCID"fC?WQ\[#9j$5Ywr闓c؞O=*TRj pY[J"YqJ a Y# \Yq*D5V3,i玁)k+$q'uel=h!UTE{f1JB}C" .u7mj:$"t&9"*$uydTlnXQ!!;8$}v'x&ษ%$DZʣDTaJ H-!i` SE"Fǔ(RH+vSi# ZM|Zn!jxI<e%PəjlʝFHX(_%>0 0Q$I;) K"GV sN)>ڷLbS,HF(B\)bj>Ce pN[|1(,MdAƔ4BYmI$4%}fqڤgpq {Wr/HfFeAY~P8.hQ/PPw$Rba+"*M1 `s@Ei5Q)y ; !ƉJ6l32;+\okH^ )jHn2v[T=.)/A'&9a6&b>31F-pMrNi"%u;#n83<5մ3a%%ǍH+gƋ_ n5Sz'jRͼHg>ϽUrן׻p=eWE5Ƴ^Sh]en|38O:EN3,J[8J R A[I/ U fkpEΚezEofElsvr!v'ovmd>Fok7EEU r{ z=LN}gH`Ɇ%cpaf0B{kˀrlF~I:f/-.{ԯIrurvM1n%Dt?0N mٗBUgKmrB"Ł*y 8x [3|,e[`02Ge." lw)SVcZ(>6(ݺLI"%ѯsK.1@l 9W+J 3Ya OG@,/ՠkAH+*| lAVZ$gVllcbI2Yaӳ3 it C#ۃj'z4acLzCr<8x8y{7YJṏ=L̾l `E'Jkظ XGג S*DðZ 0wbDŅp@fxs ]zb퉙J @55n`ɘ_:b0Sh9pNHV g~<2Yegk %0[ĚYRR&Z&{h$-Z-S.o/vֽxݛѥɀX>P96Eԡ9;M ߻CL(\&G[ظlk/Tgp9k-CkJǣ V/ ^zȸ:FK!Q 4wA{3:@ M|D#u>;_r,?3?r(FQ}-\y6_i;8;n. &rIي$ǎ߯ԣ)R7nigbLRdUuuu SN'0;a8L!e,Ȧso,! 
w`K  1M$MW|.ajtQO{Fzڸ Φ︧> R> Z(K>5'*^&`D!Gϩ=`:E#ZcS;zX8N43@s`H@uap#,C\{^{fd܋b(ؕA[i}0J/#^;O4x<*1P =G^f*x9*&<&482fNu|RI{Nh}VaFCIdu@ʃg %5;d-}82I>LzZ r)dq^c%B1n|Xx尦iRG8Z.5 ѝysLTٹvg|*Xp(f*&QGnn*Bp5R+}w1$@ ꈠc=>Y,^Oiy :d@HmUT' \@Hʚit[_ ) R`&02axUʝ$P؝2tXy8b$FJ0u3-Y" )Fa[/TW-mPX/,ոdz籲O@8~}<\])C=VtM3ʹKzIdEYM+KBd:t:@K Qt$xX+h<|i$Dx<ԯqUC3Q:9ʖJBdM!pDڻbGc( l+bXYPS6J𤹖 -ݘ띆,F(&)bhd\~Yfh@g :F ]$ f[I{.$Jf:R'bD JKl2{V.*%{qv ޏice}2ލ #ʓn$R !]J/!n4?TR'aխz^^OEFXXH =5.Tf K1B" 4>tQˮPz?Oo.pb{2{^H ^e;@b3vkZ ĐrDۊA<8, NN F{$0Q'kFeXFssfx9:br L(voIq+jP;)-w:1;3ą$\(1Hi逅 LP,][$;^-p IACW#Sح8AU@'$Ѓ'l]`T (޽bi =F1 daK /M@5ҹ=0&gf-WMG{$ijK`z"-g!f@ٙ\ԧ@ =v;*h|t,y(pl`0l )v]-C-iM`%Z`J81G=Ut2B!$/*B*z@HKWE: -Kީ; pKf|CIE87%/{)eēInȡ-j![k:FH|$E$D URhz@hw8UxszxjKCqˎ91 O2 m}R nQjḬ|BAh8ݪ,(l<"qXyF|=VyD;,iɞ) 4`2iauw|6~]fo$8 N9JDbie4 !CI PA bAu#)6>$rb9>#  o}}P1w]rࣧĩ,ŘRz7B U&31 apJ3<;]Q^ҕ|&eIr6)!h܎ 7Pl J8ǡRme`vJ2Y:s$˻8@?_ni&LOi}TjS0f3],fH5QmOIgc)A0(vPHDϚۖTV5v;ŭ#(0%Q踿x~P1_-;C+j"~T()߰R죐}zo(Y!Ğp?jH.PKDxI 6@JYuLP=\;5^.zt@, ѷSx>SG}HK<5+iZYu i0zUY>1Uq{`gbi{Tf/hRxPn& luoϖyM~4S?I75}LfJ|.}<XI%p裖)3" !ew&ՇX"X.b{X?>m^%oe^evWK]_xMV&Cb=2d',)NAJFFݜV݈YV |e8+g@/ ۶0pETQdթ$l?TKPK?il~Nc6~ѳF{~kWbwV/-v;}:z|JuN։ڱթCwꟲda7ViD\YhI.q£#ZIB?<Z~&sMGtZHM㝗HN'y@R.'v^̮ԡ _hq3{vOq̢"9FGʿyJl9O?S~:Χ|~ϝ?z)6ÉQ^ų,2y»r}vO['ϳBFW< *Pu*|,KE q˒Im$]i.9vtzK}&QO=׆F+C߻Jy%-mO&?1AFRd<Ll|9H;Rl`7.Q ):;3O,ֽ/twh6\% 80ӏ7oMVo/\Ccd_$F2VPd~#sf5^&P Om$7f9Anj@[oIωnYS3DT ],yMhul/_1i(Yʃt;1 vwn|oO- MCz__KJ(-Wazo!Mt4]o vIIE<;MvQ4F9䬤e]22oYt%O>`GAGų<\wj8EIUбdH9UfuͬvXڦ=汣N%#Ǒ|'WT .El ds}$s9Ţf_p70NFeOoiҧyggMMa ڴ;GkC@w;wfFmkDqVݠ5tqڵ%ಗ*73xhVޚ7T_V6R hv ܿnQj8=jAX̊X1И^oG "%/oѝ&$_وMZF]I+ FZKs1;{2rsWݶ ʲe=|%Z #9^0QRuEqA&NIH;W|Iι̺Xezu^z*gc=[O%0r2/dr?kRïSs,73/cĹ/gXB FyZ6u,:bqlETiwE {iH]r _׏Yw &2s_6q5 6N2GJǛh9h"mf $7լ(] &:X8M؛jvM×hՂ_n1Dة--AErg"0؜~]^1guz%D *DvPxEy]mxOf7 G 1ok[t]FxZQ X } xa$ε$D`ue3;+qn83KY Qj,G?'mELHD0q:"  1?x@$qJ'= zd:@vBq(Jp"+qVEEn/ !'93HgZn'n۱mLI,mZdWU,XU&Y'&*ԏ4RhM$ ) Sɨ<O!XMfU]u+WThדT2C# ~)я2P;,:`Cw^9(E<eVzGh=K@Hi01]ĕIT۫4 =̶){o=({-MWg!vC(x!RS " !>HVGftրatDHz"$ 
yM4MD;m&aB;a梡*WPpN/{TDSr-}Ԕbdu+\O=.)^I(:ϘuG׬ݘҾY(i3u˦|2(Nz^EZT.LD >\M kXPX@S3E޽}E>:!\vZ[ZIeVmz'VbS%fsΒ1jQa:mqz dƂ^ԫn|zgɅ EߪZ?!UƟԂ }^:tqPK^+Q.v, &l(sb=ZO5~Ba&cS1ُ)}@"8NBI$jĘF pMd>%H/W0(<>[buys.1p{J뎧8^Iަ╼Mgu{a_Id{_YIV`kM&R3<=ͼX+*aU= Ӆ.?D]yuL&퐢??ۡ. EPE\6ṞxS]t!17֏bB#vU=Xhúm0]T |t 02#s= 5%=k=KH(Ƙ\X-a2t)T~(נeViL597[̟x\42|T4DOl)98H,8 L9M\(a1 $aDPs{^,fYwxWw;zX-=flN/}RI?'oמsL< کrdʑq*GErGAiIZ eH TBE򸂁4R]sz]s=o첸%߭jC 2R $ cɂƐD ișD)XC-c zk 7/s80U55 Z-x(_~H_xEh=(18)gP#q 4 "+SSWКш*!POU@BͥѺOOtn)&4h=ڋWy+17/ qvIW/ #62ǩ4x oYBo{@Zv5hd8ПB?p*~g,;5G^MzY౵i83Db"o1 w X?sƃgb`O[.f}zRMOUKu\gz1|Jg6.GgGt#ԟՙ-) Ra)Md5NH۠I@I([t&0jU[joIӲǁzMO>nOGyS}_}ޒ722[y?bz^j`>>q0h편3+(E[SlfnMnY⇳Ӱ Dۘ햽y޷yIYKqH } mW>Fb|B"kHHEA˴}z?jX#3h=,%A!ikřZ9$;,VT[sA{{4cg:NXku3-Io9o3ڇT3Q8ν(G Yrte&b]}Y,[az@ڙ3H29o$F1؜7~k;#xG[}X k i!n'4 :>^0Vi[mj)hw- [3_ﹹ.56~Qim}jz&DR4L1迢_5|{Ν3%7Qsq +uM8B-mGcb  l^w4[Mbo:[6ؚkxK9d?;GrCНyx\E9*N39g-n9_sc:ݟ8vzO󟟠x^dGAIVo$T HlMyi9U^kU{d^OUO,9x 䟭BkA~L`HܴVNΠ5vg/mN[ՀI4`.<:Ǘ%87\q(\PښA^]z:,YC+@TK֒b%2#!9W)vGJj{mEs x}>t{k9p{AD#k3 -3 lt,^•յj]0V iFiZC8Hb9:Nnh?NC:8c5i @P4(P8[5$;9U5;yT:0%u Xb'J`X$? b, av㜟(Cw^AbQL2ާBl4Fjj54R> rj@iAy}P5zVkݿ}R/X.Ǐgr+](< u5`J`&# Sqޕ-> ħ ސ`h("?'(bEEhѕO&G00ú˜Lu!ԙYh, nOV_~<3啢B] *tgNOip}S&^xv\ff7"~!hσ5V1GN‰{;h,W_lH]@,\0{1BJ%&.YjYk}5 j+UMpjN%=УW~Uq~Wl+JRc;製ik)=ڙ/A1Q~rJu(^kT45hҠAܘFH3Xk81$ff1@<)xQ `Iْ'x a4mIγAOC%,WWPt u 5j"9D׈. 
$Ls 5А !1#s=ZBNi^_u87J1gJd^Xn}0e0s?Qi{)r=Ћǥ '̃i8},n'v=8*NyRx/ok/NoѤQNڻ@nIAPSieMgɦ{#ۄ:aDMȘH B    RLaH$ ɄQ%BB$JŅU & ",ʥ1P P+!αB( c$._ W|U{OFs9l@Mr9X6P ]>cHda("A HI@ 1!|Qe(@8$d 2UѧQL)T&41ū`lҏɤ7/l3F6F_|>( Xfl62~j<χӻe?///Zޮ&q80/y]IL_7]lşWөoxlm",(A7u!H7ܰ3i3I m*vS2rL"8GҦKi۰31X]D W>Cy0L9m{8w,Ec`7nZ킞73oxH(OO3vxڄHl?Uvd]nok6sf1؃²P,55%[Y(bcww $k{Dzf8Ngb2߻E\4J02jO'Zoͫq)-ҟ80I֫]fM  Q)UG$qJ?%R^d[sؘ`P_7j 5T*_2a5 Lct~V=Ýf۶f5e_Z) G\0-|5ZV%Ptz%˭On]/ս2l/,]oW䂑~Ї]/ >-ʒn4r bHyI!{, 6lICu*Y҈m_F[$?/ַOcřbgRfYk@+# O`Ҵ^0h/?J ( BBWzT"X1C+(L~W&(^4mЋ/2GƾWTdB z?(0/0J9NN,ЬbLP,=Zߋx !h4` f,AḴєut*XW!Rl#u@nsˇr uRɬ/>jKLaG+5L&T׭&J8 FX^1HYZA*֋KxHxɷM(?ެ>=5?(}b扠7:6?`Iafg>-!fs.Pd nz7\a`ӻ¿&,O5-gJ)-Q.ZUҐo\EKtˡƢMhcZBdN;:YI-@ޭ[)֭|*ZScu놵lNhJy:bǁL}[bBs[ UH8Ϻ9?kF(ΚfIqc Π J5)| ["Q0kARZ*Z` LCp1(AS$P(-)'eC8a,TjȻYK}})rK;W fhN/'ٰnFt_ʃ<){G76Zt2mVo ͷnhN֭C87f GuJźp[1֭|*N)=wQ'S/*j!!XjT:"'@hɬ%I#,]B6y"> 5mP:Uw_'`%꣈U̮SU_fΗt.\4ij!$o/!jyvjf`~e=8Kg ,Ax=UQN!stHJ > 9BZ?{}#7wڒjJILд?,}< _5퐙-aToo`-/2pK̝V91Ht .2=w9Š0mj#W;aBZE|Rn7ǜ9GP+2 >= 5T) $"Hˠ Sݜ]yr_3~] *~i 'nRRm)cM~}Jr"ߑw~p<-9}7عT?ޜDěG6ɟ G>| Bo߹4,rzş`|ඹ%qZS|<cN(B hE/054ɇ޼p2Ie > Vk/Z\ۧ1K`t`0 N}E:S~Iň }::. 
ш1݃^7\tsREY +pz)SS}0p*'2E ;]HovcڷtdּU{h0DZ f넶 ni=uTk(j/$ă\"*& Kqe9,cZ轖j  &Jҹ(b@ܡ(FɃgqzE@2]^d&-Vx!V+`&FɐZ=sh@VPk ǘJsLX%GaBzmRŞpŗ5 yD[-WH8Bc4 X4$r,JdAr܂!:5X` ?`K!k0zER]'JHjWfYqqwt"+{Nwzy.C6o{) sJ>?Mu nV!D6.{˕/HD} (OŇMd$H>Ry+[$ 28[QMtlJ}ެH(K e`U6HֽERx "ؙU*O$)VwlRC$wLoRE$X*Iq58[B+?!tRӟV㱣J+1J+ XՕN gIFW^)t+F,U/J h!Lz-< !6bjFߵBS3zֆ-ft'fuguN ir,,Q9ynTo|oŇڦ;,rN6rQCf`PY&X Tq5"(֊>SrLZK]/o<_5{ѡn-47'kcf~ʴfS"ãVH't0U&LHNs5_ew$/]$ 'ƚf<=oRoV[}*VOl=[^V-RQ] Da\hE(,Z:ypudppټ }F}9%.Hyiͧl7GaOú&WRޤv{E|3婙}/n&w9D͛L1?Zs9?OduD|}(`:iMͪ^ǩ_NKLdk>wqdžͧ3`KV=%ǀ2T1ҁ#Ʊ'/'S/:9d޲`CX=˯wY qԫe# Xz-Y_oON`@#d#r S%,s_#d1wx[KP;AV^AuPC E,!"[S{G<ߝe#.FۧQD)VKtoSbd]iH9pqO8!49"%/u[=ݍ-3|o51CzN./1-r")W}<JG$*1,cL]s92_} }sAn̩v[+\7c uRղ9F8XR/ BHjmt#QH/F : T[Qp*1¼U>]xL4 %!&Hcʴ3+.D9K E+1FK8#+<)$Q+P iZbOT7)RX!Ix8畖N_~T&y bEm@ 2Oe]YsG+l̆`fb"fwbNx,ڒ'7 M?$/++*?q\mU(zT*X 3eX02Jǻf_nV9L/~8;&uY_?Ӹ,v_LrW&!ogXco<=[" cX|Ing7X~4j*Mh6g uQV Xim%6<}C4H< BpW~.X0]G܎I ΤNI^Uq_1eREYG!,}k He @>#g κci/I4'$9aDL ScՀMd7tM:(oRx,2;tz; ~Iv?$dC]v/puY Î)%֡}*8Nـgy_K4g9]^ҁ&1&ky* xb9dqvx΂-CVϻ7˧6%!6gv}T8)@(iN%d}?Yw@r$^o 3@ʇ_/p &LAUh|"IƦqo&qR[=\os`Ї`fHoN*GfQV bk_@:ATS^@JAzs|{-4]}sDkInz^| "=Lϥ4u˅I[}aO\].)pR~>M߁TF%,9[Y{fZ.f "C-_)vEt*WB0vLbnQc; }V (rCB 5 -x$YmB@t5+>}}. {!u58i#|LO\`^puz淏y]5q&za\Xq>]ܡV< ͵1kAX@]l:g?zP$P-O07׿]I>[j7٬5g3_WWcׇ>ݝOV7P|dMfvOTKT爠kW)6yXԊY`i VO/b2g{831SUa(/jKD3DP \]F,Ќ(ɡAi­"V( q "'p c^QՔsV aPI]Kd#@&[[MY^:$Ssӷ ÂQDm ƵYZ9KB`͚׶Лas)bP"]rY cC'WMZf{vd|ldnxX٫W,OѩD)Z`Olp7݇ SwBZ5X6rzɇ[+֗Dy[}Q.nX9 hDq?sLړ}Túbf"0ZU*tZjfuީ=_{㭽G nbNأJrީq;  #;9mw ܝzJ2cI}Xe lW)}R;~*c{q.eP&pQ̄o=cH "_fu'J'LLuthHq! 
86C(j'Ò(]JK5{G5c)/p㻙_dS17+H':BclB\uXEz8(!Tmaf|W6}~N6MwR9]őp+TaQru$֓ED$G껫"([)rDt6p')Q݊ n}Hȟ\DKd!t.cV/p@Vʃ)v;[29v+&4W!!rݑ))SI,:NN8ٓvr^RU{qrkTsŽ >'{n_}j{r=,67q,'&8y`YacV D򹽽Eզa{缣^)BS**C*#vdoT;YVcbma1J-#ϲ6cfxr LB ƫ Q\kR.Ă>'"]z3<ٯ1-lR6\*4t#{{ V/yPw,両?܃Jxouj'sR:ҲCh没1uy˛d-8> mv~0&8RGVhw}x;(.jg>S]h+Jo"Y0)fr=Xg$ppWfկfj6CJuVQW뱻B U vʤp-az+(iMT1v2gC`H&FKjOVrVgYS&jMJm;n݊RڪqRKҦJgGU1``V `:*Q`^cQ˅"6T 6ԩ"Ac.OD>ifGsN " QAy1 :ՏW;F/!x+~B툪lxMèZ1ia)S <': 0u4;++H+DڙPPۯ= ^JFӔ[㘌}Ay$D,2- h+F dO͹dޡ1sڰ ^b&Kcb ogv H@ЈEp`e`R֋v(0o|0CLXZ2Y i7/0ذ,9& n8QpHEe$xFA%?kL4C-Q0uv, K[ivR`l|ZG X%GJ8fZq@:N S+ k Kl!(n%mZz?VQ;!t @s~ 1 F&K+L4` D f8AþD0K-!8VZH6gMWZ[f]ܖa#庸;^YÕ2做N{RI nTȂ8s9,pĝhpVcGQ/[&^e BŞG(K_C-MM4oޔƇ+/ڨ鷏&F~0" l >Roo uopYU>s~{;=\^ǐZ  !=^!ɋhgS A=ګj?xRQyKHjFݡy8.柫e/y$q; { dzB;J/KZC[^=B!m`૥G&0FX+=*0^!G IZЖW? ךGyUh?X`` ,!UyIuHzU^W8ԣ /pѥ6qAjPp=\LJr &u1yLJ ]7KT>D|]ϔ4CОxJE~9k %5FoЂ.V:z0ol bd4%J9!F)4 G3Yք{8jy@_F0=5ꆬnWh~i!YbxwzR;=^Zfh ,Xh<~-{X$09m8z`2 x9S#mk 3&!S|%~͏/Y3رbTޡ1Hf2H %bngX&% U";Rx@͹iWW g4㵞=mr Gp|$Ȕ'N=0MW0V]oJŝ+hsxm+%NHQvND N';[07ry,Y˳\.pՀc&T1BV:BBi .(I G"LXq|9)N&WUϯ^E'jB6`6݃P f;ه~'ȷ~Y+osvOgML ͱNA Mn-\ګNZ /#+T$ZdJ N`QiPJŽ RZ"!Yog<*K1e0Sm+VOE %8e4VApj svBtVZJήP@HFjM2qg2]q_#DK ̚E39F3t8jA>-3ϐ9`nKNji~V E[qRRC"y/A5B^QX8dOYW$<צRPJm5}j\n5=g>R)w?Y0Rzmuo{ҫ ((L u\b 7>ܳ= &C[hA|k:d*ې@|I>ua 0@P Rh6PRYP]nl; Ӟ3#ݯdd1g{Wg"OO&>Y.jʛpvrF<\~k^|kAre+^tOpewC]j|e&)M \`H턅ȬL=qh%|J@\UPoǤ.ՒdCh<8͜wx0OiA:k|[B$stJ=̯R~bAMPD*a:%:t~!QYQfԚV(!P(Hqʕy0l!C`vQl92öFHA!e3W/#~2YX/'B2F\Dyo7Q(/@N]iRzP^J*g m-$ H6{VPݒg"QXP) 4|LT"i'S!.G 0TdlȈ_E)@s16rMb%u d39{L[N\#ƋvmD/"fʆoӣ:n˝OOB9}[_р~ Hf$aDF': ) {2oFR48 c,|3 1$J_H홾HS 5R0(NՏ|V@?yvEV9yݛg;m7:M}p9."R#QCڇ l=@ݓ@J͋% dPCԿlɺ>PB_>ތWpg3\–\`¿c:B4}~gOY--k2kM:қo4(! 3)HAP E:!Y9Pd~9C6oA?Q׹9x׌6 }ۋ114Vx=33B@@qJKEL ŸMFcah0p43"hїoeHu~(0NMhASNV$xJ(pZpʧx$_nShQ)Kg%okXQ4^(JRZR&fj\=t"`V((Lzl f=!.Z.>YwZyQZj~x>kU+|zR:V]7ND#t bU|fF_El>_;K*|:c_\^-kk,/1dۛt$צ;3OUV_fq!OXM']8QW7`BǍ.Tˊd!="S:-8dͮ- _?!q 9:{x1ҫC2  L Quh2Wu90@?0On&Fx;P<ҁ% L xWvXo&)~9pe*> ';įx0Z ~%{-o~KD)Yoz-p4Bq ßB9ysrs3<:D|&x܂[}%y3,vy>m-XבG9㓭8Z/? 
)l=w SB7:[7SygksO|"rO;oQiѷ8j]2$q.V촭櫊}G,XAcS#/fhXnl2Jb4K>TJ hzC t2 F\Lo-`/xṕ3QoQ [D2[b)˄MjCp S;}R6/S2mv.;WH W/e'EF4ÁiTVUu֙?Rh ..Ho5Z[ѽpb-2RJ)`WWxW"NPr5&&h_"i ˞wC'ݷw;l#;/W(t{9I00[,iWRq8Ʒ9ߜA-HRoNׯ )U\[=< XI}h"g-a"ş(R^xq:Jh}Do@]YM-+ x=&mІgR~'p P<L3k~=O7iGe 9H< ZAGBV:JS3 0VRYzd qBւFdWyC [(Ci}0@ ) J1g@%D(ZK0_FeKJ+H(T%j>vdkPٓ /zd٫'!g@kEQ6zV zyd] z#@\ɍ- Afm͚֜m1͛&w1Ee(b%(#CpSKb)=`m'''93 =L-XPrt?poV⫮ZM>)_ ŷD9//s:z':pdž y{xiA)(YǨ\6կɼD?V$DW6U-?{WHV0;z4лcag`𴅶eCA%SL1O{0r}d0 Fd7*H_en?^nSh&l" v/"|?ǝf~wryT%0XEMBu$ʻ,2E!ʼtmoƓ&삓MN6-J:WѹכJ_'ȍUԮ|%vؘn S5lѫt*eV^&_nk,rFN;-Z8,?^쉂^h ƈTf_'Ro U~0 I%/V[p@&X[nqk];)ѵ3ٝf@9~ծΊ\0ŵ2@94ij$kа{±X:mDS萓a96XcvV#LTfկ§\>1\u/=J0^)Rb 3C;(6J$y)BD9^>n|׹hw%+ץތ̃ RY<~u}A>~˹֟t&ݔ|˓QT  -ա᭿'PA8@ > uAH,VAQctF]N[#$"X`DOZ_;{85÷ woz* *xjޜtd@"I3V$ƞ΋Z{V5O9ƙdDi%hS`ّ66v̱e9i{VZq .=ݒ"P 9NBLnui٫}"I'|m^eXyVh!jya *0A<КQ iQÞ qY6-J|2oe!}K;7#"o,)yCXLC7faH7Yu3#/㌼32\W`ӻPaZ(ͥFT+nBIF-@d9 @|yJZ'޾Lmhu#="FSO$k{:ɍRr* ( .߷|W@%Zx'H愡Ŧ"1y9xt@%_c2_Z&[mgqdD8 9 RH׀XBlp;ѰI>EgmUw/ @!&;Qo~*!D!Ν}zQyi#"&{4ph:_x1d$.epp=w ^Md!Sg" !2@0@$1,tÚ#|ʂsp˘lͻhQ/:;>4P _A7T%A@{,-ؘ1.XD:Ooߎ@,0+gFni5tއCZ:qn<~MF35O|v<7+b&tL<_LU1!8~I~Hd& ?_f_Z/Y'KÏ d. ?17Y<?!(䘪d&|T(j%2$a3BhE%7B:}Q8]V8_$H(@ *lM>:E8:p}Xff33fa/vtef^O$2Lgą(Sm!.D{q!C!kΚ^Lpcfbnmnkx9j =^BxUVݗCx^,QBb6!$3 FD?,>NAU{aFy^ uŽ4'$brEcZ(Vkp֟uܸ|>A f 7(kX5Tmiosz 7kntK {f?-|zӯS_>-@sym_feNݽ5jXUq*™[χ5ޤhr#H6gdVn>gz^C,53ބCϭBGU=ө6SC$w4Vvur>ɥ  sѓG:zrE#s%d'L]gL'^ñPwK8mAۓA(i"SVh:[_>6Ygֻ{ۣkt.~~ѭtVmwCj<{wC偎t+² :2|-aߤq-eT:V0 ->ItU֙<܍Wd4s&=(wѹ^.%p[2샛 !-?0d8:{(0(=u oddZ^( $B4 KHmB`P FyDi-Q0Ct Qd$IFq&`գ(5-Lj/(H"[Z(X̶֦ǩQ8giqQ(gm36Rp\h>Q0txC a}Į"5"ulEHF!ǣܸp._ЩkKCK!Y[5SW\ǀĿ,M l!4Upr^޺Ub]}$"Lw?[b1ݺٲ p9! keݧ\~)mzs*JBu$ʻxL(!ʼxjjolr2yvZfwMa@F{ljTȀ]!18b L aVQQp>n>>ЮG5|Oc`hY:g;y-n6 dkL:]>B$ ]J`ǹb=T V}1{D[Qq!䥝% Yb>"7Zaе8XA :Vwj_]]H[d%4%t< ZȽ5@ ?ӊ/d1:P1˯*rWAg\=rVo:-+ĐL@2T<#\;M%A*/Sg]&$u20ӧ0JrAsa{92  Zf{yNH{T~W2䁈WFEb' `=6|A\#ɉhQR.!P X{l$Et)E˳9hx1uJsaǒ>fRI:HN(m;y. 
aJc;"xsV3?P.OvOPN!ŀ m[&kR~!2`&l!ʳU4`FN%njQyHپv]5P%)pYPa2s8,P D-orJ"#MgE rj>@^_/oǓN?NbSwonقOח)n4fAqU$kt̥K.ϷP\?_}p/cuc_ߏߌˌp](njOޫ (d(yF !`[&[.$5BP 4gjF$&I!S-a{^n?>"sYeg?#LNFT@iO:xNYWiqI`kBJJ4--q)@|U)@ 5hȅbJ(,|Z = tK{X}  z$H0=iY G^ wm$_a7%ue.fq_-W?m IyT~hFe9feK"A/ׇ_pmks_I1+pRT:DƞqCƑ37ă 2>i [Pj=I A eAF`TL#q)gbIxti<<`sIGkoeJYS3/tj՟\8_ڤdX9Z@ddO»$ςiA iA!6 R1Tgnh=דjA7Wjڄ:?ߞVuh{ I'- RF>X> SԢ/jxRSjOq)Wө[WルɝN>~}0ɍ> hG7f^b4>e}֊fJRRε5dH|NJL++Ct.++jVYVˈ^TCC &k0Lu^lঅ كzZcbg_1[ygJdԋlԃa8V4<Z9N'ˉҫW˫5Y~=D9aHc88< {ņX!sLL 0ʙ >"@& y\EAs x^PJHJHT(SYC"T fzEI6ӧIт2%v"h;HjJ(1R=n(B}HXNMo.-]IG,M> 2>TAhAk-sԂ!]gh)lT|L$^^.(\RGqpT<; I Ә_Qc95 C1D?jÄ:ԡ..&Nc7P3 U4.R2hJ6WY)Cy͞FCt4y❎-Zk5'p<&}Hŧ *Q_M>_?p˒sfؓ<`0=./h`;8I Lk7,_>6xI˚>)p_63MLJ(z6;pk;?<Nҍ`jb-\ ɶ9C /G8BV\ɦrJυIH w|2[wZ؀5KOylܻ#[.N#[vJ**+*CF *IdYH &ٖ!+8ugz }b}v,;@;r0vd=_Za|6&6tz'wB@@Y|uE嚥">0ϥ`trN.Obvcԩ nDɌ7eoǵZQ2n5+FȘwh#3Y2}8;{=8̮z\ֆ@|p&ᨶ·Ga<:RRr<1yzV{樽 =qP%}7ׇ[= %5/K-U{7;8OLU-u}wB'=7ѻM &2Y?Dz9khn7,`S``jf:/nFc_]F˄tޥn- n;_ Ԃ SuAT嚅}^SաYkVD޳W"%AbBurI;ɓ\94G[+}\ LVޱ]ݡ AŋFz<#.F??iw:SWMԡ㽏N)nK6ƵGmORj+%e,7.FyrBG/RW2 &YX_ 0yymN\d.ߺ=G0Jsvk: zʳFroT+3hruY|h_@Ya\՗8)S۹rtU[|nAE:ZddPdRqD S@ajEƠd]-g\Ω`lZ+c|keA%[Dde9p#ϸeU>6y}fXnb}AS!{}$K8IVMIV-PFYS< #h;52[\wNgfWkg'F_8[\Ya;[B!V-s]_5ycuˎxeϒzd#=61X:Vo&!nBg-pR[71 1K@zjaKeXb ñ&BXxB Ƚ4r F$^fi0Di7JV.Vݬ\`<ȦT;Zua뺰V@ MtH߆/HYXj&7&gTȚE0(h\(2!sIì,-eFDFt*s0)o&B~5a a_dHDl`d29OdQIY3F#}ŢDHIB";"NLDwMktm‘Tv}*-gߗ<ȕ81㴐J8q9e*slIFw8y{A GLSzW `U1^r'F ~ -x27NO)B8>Wy53`OwNxp\F99)¢n{pv'䡖ȥ (xpy-i#lM}S*[i nِzyiaf +5u=yõݿ 񫱨3gQ .pJlj >e5 }f.CLz)JCf)MZ)BlZ&ue==!n^}# C[Ƚ?dB~AƸWwE>֙n_9[9_v Z&_p]#΂H*y^ NGt&vGS uYbz\ KMݨK%҈~*RltDކzfĢyhXG]hNXSjG ($J&BX.:rhJVJJiؘr<$:Z1yWJ>셵qT|Tŋmzaj=6}=3nIVay u?~ c?%.d\xKM5e}>ZI ૓ ɇCP=/ o{3 h@ ]T =ܼ4s{naD 1Ët6PKYyԌ ;Ub&%P/M0 h^hX]>:dɘ'sL*dJ7 ֦` {X;erz*:?mYB_fM}3`KDuJPc'WMRl6I}986EU;]]O/S!Z}9K}Ӿՠ Wyl'!{rԝ<ٵoK0jiV {oo]|KLtww.&6i_j8mu{ b^xlB-:L1.:1?ft; Z@)+ٷt~*fDSSkkCv1>|J|$0>*8Cċv|r,7ztuY]d>v,Ů&1[ B%(NA='/>~v:Vy8Eom! 
oM1Ⴛ!7acTQ&p K]}g%R.-q΢xJ㓷 tkiGvD{OhBioҭ1ҭ Y OQ,OܖnSnMq:M#ݎEhToҭ1ҭ Y`ҋ:22ث2'ؘ>L-lGQ|E$0Yc$5r(EJXh7zV^qJ̥GZư8jZRR,؞B7~Ⰹer$hC3S R>{?W;;*.aqd}$u[9ˬj +rlFCN O :B_[G}3߲FK!Vq"Ke&̮|մv7pHd"*Uklo$Z%m'!l\zT swTDՋQsB>1cpLr HiLKH+p" 4 A8k'kVZ*6=8|| N!X ΏOgPrxxEmíߎbQl;oGe(md9a܁^Fɉ`9' 9ֹ@8نkQ#PdDJ#NpD]pϭnM=i!uQpKHJzJNiAJ-q}s0ѱSn88?U=7fߌbQMپ 4h#tAfŘSǵX8{@+!l_4ݨ}t]/kfhsZS%3 9rɕ9vIODiOM|˾S)ͧܔkX4 ˘LwJ|l|()ŜaY5U-ձ0E4) 2OyάHl:XI,q`k%9P<,S%aC`I0zje%}ֳdPV),Ɓ3ĩpN"CA!bARa^n" 9\n"0v4?1bD f ?M |1p`ɖ4E{Ou AmˎC[y4ҪGȻZJ,=pˮN-[B`{[ DETbG{- !=$09[1￾<%};/l $_? n\acner utyLc\! `-,ڵ\$4EY[i0uvpޥ55BKހ55qffR-ʂX\HNX1!0E? )L: 1;a : *P c3^KgnwpTIA+wܖxD#8Ԓ >[I@^z=H!fXaD2>l3  {]F|5^ cIyJz݀z5izZL3iǜ XSUNxp|P'BL5 d$Fg.Aon"&sljs [ gƙTtv6RC9 |)dΔI-A`?0ʘ8mwȴ⤅׷3?l!xurIqSwj~V`L泻?llvr@VʛXbx0wꗊ[og}n>c c!IjKM՝Ga$SX$񟢒=6! '$9$)5|96I$ʎ@v;8dLʬ S}2I+;ΗlS4*.Ka Ÿ M^ԈG<)0cKfǎn../&|[$#*m^)x˿(z;(>쇻웂pϒ/^PltL'HbysK )\4B%BS. xF2e/C\[hĆ{5f:Њ ɝ̍f aJ3gh M7jT.tMI [_"*ݴ9erkEbE3+arr(*"*T=Kk@P-[of%aqgL3_0r-dDd!VLRzN˜a#)4<9l -D@C= ~Z(171Ϋ̖X3c[ \s jCtsy95D9/}sGiGCaZ(~1Ufcf`n/;${_;CP7A9ɿNKd)4pX;c(4eő}82! 7py+~BYɭmK%MK.hRVIWӵk~M.1nqN'N 0W3Nx|g*IRK"\&&Tޑ@tcuu'? BX Sjr?q4d*cLeRAm.]*1IA=r`*1D{T(Z4兟{2g;tU]>l/ me*m=J"̞=3H"- ^xQ1v7]l4ϖ?lnow᭓ԛ:\q5`؎灍 %&[}Tp )ؤT,K͢6iOAŴw0bOogׁǃ@P,T)FFB`9up?aWE W̋ʼn6ʠVnYKlUX$ۯRvENJ#Hȅnr]FaSJ&EظĦqEh&%gALj/#L$/{ á!ȎtR.r~!~+ûqD]'y?E. 
M=;vx%wjWb1p/C*&޹E [7.M_~jBufbƾ&m3x`75$yg,(Rt=[OArvsXJ'¡ݶ|X+ba{8khvfBf;|׳G 9pxޮr6) DktB ^Nb&dpj"S`ϑF>G[\%mGzbCS;b2< ״FK _0-rV.w0kb'm/ŠKL+Hͳ7(+d-]\PBҥTd|8-9kh; _!:dab\vĈӆ#T'e9ra8QꔥhrI7XO3ՆZi#Ne9tta245|+U'l"T0}q?ֆ,DW*S*?_@xѤKټ٢߿ߥr:M׸|9t$]ǝ 2Uʩwr]zW՜zd|7hYHo1jmWPpDƩv⦩c#QSɅ =A1.<04b ?Lt5CIJ7?>O`w@~zY(S+?dSJ;5j!ΜT98 z!ZH LwHޖX/z|*cʆh.UjA)+ϊ+TɅVh>m^/*pV~<3KqÉ i%rJZR Ek#0k&m4 / #jeZV ~ z<2kU/ 6Ā*d.&7) 繌ROxyuЬ+9 ei 0us**4 eDX>p]k(x#@r ߝ}ɬHbD2g}0йpiKmLGK .HxKUR&[0B*kP_Kh j$Z%+ІKhtXrQ~gkW ;1 ckWHrEa jÆ*m FQ _d)LD-pAJZT`A8ãr,.@T @,GQ͉X INVܹ٠yyr8i0-×gi_P^ͮJ.p`{kb$RSA n*S'QD8W*9mSByNک7G47ڭ"vhĈZSt95 TA:җ4͡=9A_OW& Ji 8bZ6SUYIָG2&(OhQ )DSp,lT}>(RZYH-%I"imI:)j|$55G7kY|6=GA'"}jvEDd=Ք2+r@K+lS+(%<[È*dc]Pi7TD"/q`R$N%u]E^vk8m 2ѧ i]QНRu,odL)娍t 䂢u PSmd2ďIq Tf/Mkm-dڬ 6x54SL2C Q{ZPK*/UQy%$ 7ֻHsv7;%nŎ<'m[%:_3HRR\3Xg2{ˑNePfD֡Rɞ Fąh97+nk<be(oJo}i`q2XWfbFg܋y4O^>}u@ meƃGU.+M l_q˸}ûgDž Jhn}j“7RJFMdJ%˻/#t U')uMz -J$y\e֣#.1eg =oqa` "0m3=ċ2D8c)r_y?5'a*boQOf 'ǂt :SBV[Y`%iJ.) --eGv17FI$ eʳy30_fa}"JJn$BshiZ7v[D$0c$/lǛٗaOS4BCF8%YPBM.BY |kq̲a$z+Y-;8~pyIjL *jD$:iH)/I\w헓J}o㞖e_K6N+@rþ^W>Bx>Hrpo'89dS˳'#}62xgeu9tib? F]'˧J*/5`Zty9G@ @EY=ʲ/Պmr/?teὣc\{ YXx +b:y /pzҬsW mMugh7Ht3jNWE7 mYt\żV涐C(?dO/7WYT^΋bY]zW >7$ &:d#GưMs{'<; `ƺO!TdWw;YX[R#5sȿ*ӭxs#O]]o94E?# I7D x^? 
ZUMumnͥx,H1*ߺmIBq_Pw6児xDq An;<*xmL~,kNVDcs),R4QSsSph}Gui c:4 Q<Rn8uAgTh鐼eZndbޛhp?*O޵Ń0͒Ru\;-lob55qHgWJ}$OiEk.KdL&l7r GYƔM\ZkZ^E V Ƙ [f2NG w(0x_"%ꍅa{"pHiGFNu3V Ƙ D(VL$Wzծ,N 7h" ulDSqY?DiS[ U<9HƈtYg.y&qp0]ti68?b,G/RZyj c{==/c$4JOBTΐ$y>-I^.ϖl筐=ǚ' /:(evCC!j nth7mmy1z6"g`/x&/'\rҔYvy׿r.yc%mVڤl] r/HyUd{${.(6!ز^F} 7t^ɕ})r>K=6/W6 $qj}fz5j-wSb;OU"Z')+Y:jIiv3ڮsR]qeDNJS Z<5RD馓 o\J5va/Ԁ$]=ۯs1|xrW l:ǽ6QLM[J }j RH.>s@q9k]^|;';x =S,V_9bXN^Z<-{JVkuxָ(tsK2D޹yRHt.͓lq$I(iSA̧ÇU)Z|I_hBt l0.՞G01R)(I+r PH"ȸQȤi99 mM\H&LA"r-4ݥ[>Vhn$x0`#TW)3qwGXNkh [X c=#BBz UmWVP-9}Hty ɷκXU¥z'L a)lޥI\kPփ*9 !EV,ԇ&bHߙ?ECp H7K*/W^oTl {WJ9ۚj`:ֽ a &lTTnDJJP!l/YRq EQK%k"*.-¨#JcќҸvME`2"dcz.ed}MYd(t1I]dՔinڷNQ9qS|w˴ 1OÞ ]|vCs?̹E֤gYTLL6+{LfnQT^Do<[/Wb܎g3;oY[<|2ӗ!T(ȧtMZ0ma>Y'0-He75eRϫtEi2o5U ,H_!I+&OQ`/'UsjDF7};6 /Z⯅;4CMHpL#K _ [4X=U؝-# |){To;^ h7iw\yx`]|6gs:Ak2bZI 1Ǡ_lO^iDљ|s,It)M3u~ʳYqO,VD3:?=2,nY}֌!.~ČܑB0u1:`s 5`cpM gJVa3BYGioJ殙h\+l[כZv?\PtZls>.P}H|9SPߩ1{KL7#PӃz3oT8xW {^MԔTJC5ܛ!Ӧ\A# .yk%AuAϦ4r~5ߪm|W_©oץ'BB.UX\OWxh'Aa,cLQԛi!};@"A,+ K&6Ct_W.3nvV'N/qB++Ϻv|q;vpʻ‚>ZXM;2 kqhkIgiެ/yړn\n}xnZ]d 1&4fsӕ:@m$uUY-K댷r^ncztMf+ۡ,דvvOB;{[H!`gOp ؒ) ^H/EK :|̛61ݷ`9Ԁ02qCߩ(_:F,bH;I,zX^hLF\{ y]rvH9zAirشFڊNgsFz2$&r(!HdDd9qNV[15Rgb?צ*Ew]t9TBV'M )'S+p1q.E?8R0gYO3/ gЧ40PLp 8H ts<`[?V8&="AsnRH@:&,EޏY-ߐ[qD81-zX'5%3lvdӫ_]s)z$eHiFɝytzѩU@]!KJyhHغ0-8|=Od g(hLpR G5_;۔ *:0^27ERiQ2 Ză`X *KqH'&:`(W/{mQG7(LsƧ ).tI&1uFg89"ċ)+3WY)`rC_VyE1 ͽϙ7b*hb#ͺifG俘fyBi˘$6 x]z|#ň3$mĬtG)GDwW.|&ΓǗ-Sfndߺޑs}yõO.[_6&qRWgY~K_Ӕlwc?$&sq9c7L`e)LQ R~'4Û<7 +n)B^|H3 {mCpC!aT!j#'YYL≒ņ$9mdLĂ2 MZ Gj,%|:!P%i&Ww*}f.01'hL?MzT]M@STn7F_7w J*}V"6~q4qQ,?=T׿j[2Zc%VֲR;=˳Ԑu;W3 %U#(Vi.QYBΒK E,T2'6U3]9plt˻;򶛟OMTl0yTjUǮFQ2Ҕ HEtJg )cko2Ι ZȺ2_F7k -ٛ*TPDFѶV;dL/9`0FAI("2krϾF^[^Wqh ڧ5Alz1 3/s6w'<z3JkDENaJلcmF7:ʚhf4LM!GLL R8!ԉGH2 BJXBvF4*#Ղ%aV\Dr!؋G.lp+s= l9HNcv!JJ-ފ(g-Pa%̾Jp[ΟA(@~uv췭>Bp(-RS dؚ1y5FIiE:r͵)"X6HzW 9@0lV2g[‚l̈́zGHC &vW\sEDɎ2\kydcTNd'I6;rTK4Adp(gd"|͵)֓u,b p^2LLFm>=g5tKĚ! Ƒj d:=W9ZkA]uY~,"H +:sY*\;SkU $ )y[a)b-7.`%Ipy&%-9%xI&xpYK)bV E=jìʖYX iy[p1E,(q#Ěj.>n̔3ˠ:HyĬqO -3I;RG. 
dRV)^KOK!BiQ*1*C|tjV41a1l`cm̔v(PD]p=(t!85VuM+SxऔLٍӴI!B5+ͮ  d6 ƴR au>C{>kjo.w1e$y0>i^)3^j40PiڌWŊ /G#[jȸ%kyG؈[%nL|2 RΎ8m"Ol9v2eg骚SdLSn5]TFs'Z/{j *5 #k_ åen>./|E&A!uY3Y"Iar8S|̜N ԛL=^ Yc2fc%!KA}z;T1'b!`p[N*u~1ú6R< ZYr$WyKjj3 Wje*1%Svp#pc~k=Y67+<˩jW^11T=˥F46g*xE½i|?juU{ՠ[c RCrj&(3"K*=JlqI]26ЊJi.'{ 1G'S.Hq41}G\d~h_eZؽXVl$ibٖ ,mm<\P'sN!iYAn6huz1%]`!h3c(%Bϲ\ A,M$R)?7ѷ6T1K K$S2;+fK0}- Aӥ#ՊD>n*-xKqӟy,w?t}Uz%l!I%P$YېTH/a$rHr]iGJna͓LӿVj e3[+ymN!'y"$q/Lj4ۇ0:%>i;;\ n|  /EX|Ldiϡʹ+;h* qbE#y^y OMAqzP&nٕ4'_$kiehEvȻ$pH8#m"s.E]ΤbH֟Ac9j_>9V*%'r= 0; 2=tsh5{F[ zNU((n- 뻇W(;v"gf g-z7{׬`Wڗ+UxkրFnV}滛}1[}W;[=/򖾙HKdOhuDۃNSK[Dܮ$2LcaZ0q٭8r$rPg[ΓB|T~сfԁ6=`!tbϣ(+Flž%**,^z7#>_XfųM,㻦֛W8_<*jgw~&N9[*1զ#+/וfRܨ"mŽSCi\Rd(uU =qK%A'!WUȕr$-8`Iox\b1Px8!J%k[T(&y28CuQhQ10Bq2 8抡 x`b|H--\ᴫ!qօ,vWY6(eKsCh!r kJy3&|辍PDZ8YuSãXY9^xAq0L Վ Ϣ3̂]p#LG.̷@P ҉idI+m.P1GWޔI;)@2_g|#smh*4 Ci~(`F8;GM?v9ӽ^r쵄z U=G(Tɘ?Aj;K[וQ_Uy~oG)  -4<@˷` tr}Cil'J -Y 5Ei5iS⪮{ `ze]EϡF"XhļP_OBC4IKnܧ+~Pr ]-VӳYyrlda;AFhlp&pbcjtsD jN#L&8F1qb֪ri.!)KJ,G}y,L(&p)=KtcQl'Gls7U1yAl3Ҷ' ?.Es`# XzShJ5f \l8IH7ɜpj-OyYɰ/2䍳ڤNfG$/NURQIt9z9';vpé72 jmQmCtM12󝟖YmS8tV; uͫu~B7g{PoJ{yu{Y΋.ӇlaoWA8'X~ՕY~D!h*aj\[W3f~q;w@>ֿ!{˥3?rXiJBL/P;Ґ\E3tb4v+wӒ1x\ĨN;RxW1o8һu!_fGM5re:Hnc"r0?7һu!_Shȹ;<ťOĔd^N-*kd㘬GZާ9[?] f6fr]k̈!; D sO{[:@{Py[@LƇɠ4&,@bRƁB"7Ft!bcyU. 9ʑHwc LyGC]ޜ=|5}67cab%g:d?Gx!a:m?ǣ[yxu8]w&EyW#w L|B<c15 Kƹa+rLD0k=+j(F u5FAH0Y';j)*]P{)Ăb8ȰL΃k:b>dF/H3+QWbP\M\ 8sgA\['j,y1;5"XhC0,ggKg ;^u/.z/gd>!8:J26{bI7F$أ~u! P-/pu\O9Sۤ'ki^|2[J*5CtNPWR /pZ(-]:]dYflmnr%/XYGղvTR+ WNXO[ U*,(kPj)́DМm;XdwT4H[,ee`\/Kadi3ӖU,^BhHΔ$*"Ghtcllmnm*ڢhj!,G=vYh8vuvQ Z)7Z5%E d\[~̊>m@kkxW<Φ0)mg%"2 vjX2lWFU(xkUkJy$ԆUBUWU%nڸ)T&)c+PX &_>8,\dS_qldp tś%w?/>mLO(E?J#K({gtxQܸ?߀1R^L(+'8mĤ N6Z&òw݈[1ώA񎗇J.ky^@3}UA24F U~ `R>4r7[$pH.T[kSL 1^&<7!3(k_B&􎈔2f.ϭc[T]9ZWJ.95A{tE@x‹”Y-SmNy}ʖe]#~jq! 
$ u?Hg-'[0-AJU,du3 NRxrJ:UΆH;6!󈵀J5w5'#4>^RDE E-V֫h 96"ޘEE8cP?_DxNjPoZXY@YQ#8 h }c>mmZ[S&6]`!Pf+%$j(EIa-Ɩ 1Y#<…bWa%%`a- ]aW*l Kn=TvZ#(D?D({ad GE)~҈Q) _:5.J٪Vv0UVˢ8_*Ov}%J֭FdSeMH,_F*,5ͫN.Aw5Cq>~;ÿA[w;@Fp0oH{ T&r^RL;ۍ<^VS̓29~ĝŸk.O޵N<#koh11Ly}~]1n!ƯR9G BŖv)p-3WoHoIFK?>_2HFRGur;ֱx):O>dX듰^[3^~ˍ) 7͒.X9#IOT$57hͣlZdsĻ'ݹTBK Yi}hB2 ‡PY+CX4xMMNdBb2P?y;Ȅ)"̊eJhĮ, pcmݓ'C7ю(Ōit:B`{&uHz7{׬}%$~]?]r҃3ZK62kܗ}?mٻ;8~~Q.o9hY0oPȵ{px {tm;(n0EJ \ TU kjv6>4x_Lf dԵWͅ xBPVTB*%PA +TUkW6E@{tї\Qi(P%wJB: &0&y)W"8ϪB UzeEH.R Ѫ$C֡ Aw[{G:p_sŮ;&P4oJbw~ ̻=\Ρ@VXT%ddCpRdq@cPBOʖ ݐ)\b˒%=OcƢ|SױeK@D[E ʒZoC3k %&T%Q䯲N֚!|g - de H` S˥B=2)|R m΍apI#]ȵhEJ*~'ݶu+E{ 3| K|ÏN/2RCRn U$j BH`Fqr_(,U, 3D*]ᄠT[=H#%(IVh|Q 8x^. d5555K#A 2n˴mIKDk___O{eoD.m & uk2JV%1vU 9= ` =q[ oJ%~{݌tTj~asПBYgDz'moU6QnOON?SQ+Q|路nS39Z(ʫzypC#SCJOCcP<`7Ey95o֝d@8]]]I҇!zgb܃9^U \Fm^8C fW ծd3$H]5WPM' ǫ$2>G]ݦ?s|a'QL] qƙCL1pjq&qU}~]-">!9t:u cZ3ur,"! ~rof0_Sހ2SQ?4x*hR=W󘋇z8Eޡb"8>]廻y9BI0]4HYR唍Fc [sv0kv= BjE?YMy3Q/6-xcfs RhԚ(/cW9='a-hC(.^D 8T(ZA2 :WHP>6{eQdH.x & lqނ0+r,K^(W"vf2U8YOсƇEBP$P !Ƈ+Kjqw{e[ݵ1v&bMvgS. 7E*o۞9EaL{G)*ձ.ӽAm@Qh Ttr.^ǴWu& wFEJngZ#^Q577cGm9KJX:`ʃr}KF0}=nZ-WJś{_.xŤ76ȁ%d^KpE(Ac+fgKPcVq z錕/f̅2\lJ0^Wmf^"uV0Fcs)*Mb~u|0Z[QI4(vDsC,FD1X}?< 9;8fVp=[Q}I $BN.}?¤poši2Y#w߮gXYƎ_ 7%tQ,X!swu{=d- Gs˿'muNUBy59rt$irqBZ&iqu}@1@$/s *dq?}$_\g 8?X->{-O v&)NY+.SLD$Bs+̛Rcݓdļd/čsD)FkSd^,x`0RE3T !jGf;ͨN}yeAXOEg06klE;8oӫ!q:%0CqǴ/3D4 b Nw<&C8g Gd6nWdd MXfXYt]^%#oxȃ=svpT#XHn#wNɛGn Lu,׶lGQ78 W~ʡq0>X_pߖ/9+x{KmɁSQKtNZe~Vʒt `tyt`߹y~,ҥzI.^g|Ex4u&%H4/^gxntVgo*o`^H?b4 7뼝jek+F;?oțAjAPGl#&L94tvO DIxC8L7iWlkS8QPN8@L(?AclmRq:9 pExOt% &J}?T'?ѰeƓ8Fqz_ɿ %(kW4Φx5Sv>+E|uVu F̊D0RՓsX"K&1硙 Ğа̄p܇а`afGDžpD <>2ƫHxE:ъ!HQO[A?} T>_~5{miŵ:e dsqꊋEk}`^u9˕H(X%%$ӦH /A ? 
rz@9ÿ2$Q^|tAՃ=p2I)'븽[^ŐGGͳA8P{nV, pZt}E< SOU_-3jiok/kkN5MXyzbס~;+Y?O~]I|xʞ?[L hjgN{UMn5A'Գr#7ADۓn}x)19b p{nBS!w#ҹc .ܻ[זXS{Ny.#v9.e}QBTE$yzh /徦1vzO<6I{,XZ7o*U/3x7f]Ǎ?A3yÏQͪ}iPRPt6 dRL% CB#;D&[ƴ؍z)S}>%t54 azҠ Ea H0&*d^W~/ONۉ:֒Aa: waO%nU%qgDuj((Q5ue_EUyfNY%Vixev+XT1YZLTKAThT^uDiJ`HtHz$|Ý94^}hF4NQ Bh%sHuk1LI^`$3 (_O\S2d52%\HcJa}-1pr̞nr2k_&XXdʶgg`l^b ȃ-[ϭޜPa&~ Nǹz:*:]ˍL% GSa=٭y\N 0Slw%G6V.0·_딢D+n'XA`55DE`RIT%:DAVU PI*~Q*8V 4tXA]tNH][oG+_v}97X`1KD_-%E$SMR&pfȡ4[p/UEط:YX? &:XX&noUѲi!MG6c)=)SҪy*驴 .e~ҥY/.=1td*_û955%x|`Q>T|_ k ~,Z$;WfQ[2/625u߯,& ~xg}! HRa-,ʥcx_"\+i^Kxi{O{\27zofbX_lpTrRsc_!VR!3eF\DӜqIuv. K?J1??PK3ugQ?)[4hhry㏅ it' u$u$u$u-NMAu4VHQ+„%H$ W#31&":woџoS.Y`A݆Ֆb+91ebzV3:\ڍa6"Bax`;4 \ Mu; Z &m`[(~/wmY$P~}."#mvZ~Bõdb-xMfFA\|dE,fz~6|ܽoMIHyÛEWZsu93X*ø*\YM}J P# >+]ț$ y, )缡Do:3«\{ggY ~HpFN^ i3c:r㱂=Y9"W[]Cg"GRE ?-ǚIU)?Y 6y11"g^1dqJk(4Kc!Z͝FA ) edm RhIǎ0lkow[ם9"=E%ާ| g/WVhcje*G KN!HQ%:v #3oŚhޡvT8qR㲪]K9 0J:EiQDml #i ~4>5wf 0|#**9~_rfĞ/&af3LME&h-5i̤$HF" P =c :a*7Ƈ3TH;|aJ= ukcg x' Z SL92(ƹl<  ,.J| B5jf&2nWvz˂宩YZR U-}PZx 9Ua%Epi-!(i("D2DUgГu>|815`[aݬ4ցoxm@C RE2v,];~Yx(i$}^^<= Xƒ3(>oDIwQ'=<6䳞 %Ms臋+-8@O7?d%83Ҡ$8eo%#4ppi&ӿ/T3S#*`08񔅞I__û6Oi*I>»饩,;*qL)m|wh'`UAc?8yxUgvolZ.]0oǹuƭ݁c.B ?YYՉ3)^£{wz1WZyos8geQ{׫]Ѓ[`~= C,*h½,je 6Af=e %Y؄>7w9FQiQ)qxFYJ ʸNG ćdn7~A{ >9꧚(~DH^s !9MpFsP?5p`僔|[\  5~QRJ6qxHIVOosQ\\/et8Y/Wwt5ʵN@JJ68$jUkTcpͷ.8#KbˆٿgcnX p^r6om1ZWoF=+ݕnۻrUrцoʒkď]˘&2woɊW dYQ_k n3x2t}*wc@kt1U]-.hfëE*|ޏ sdU`7wYm`鉤: u11"6G(v9+ߤ,.~*W RV1})|+ڽ] I]ߎЌ-)XLX% &訽7Va]lic5kĜ@Ţ孳XX%p#X-q/+vyge鴽2ͩ3ń\(Go\@}'!"5јt(|oX$(.qhXasөzEU]+({yX;YL2GAFX*A;]IkK IDtDtDtDt\ѥ$‚Y䘊5|YtrLt`B A"X1BGfCl|oXpuM|%&Go.ât̰7D)p0#5HES!xZ֪ URd]y$T g P5 ^z ?).f-gcu8aIMׂ9a2+uω^ Ij33: 1g:?E`BHQ!c Y;h ako +B)P?z~ײ9.I;y];] KL<`-4LlOÙ =Ťr~^wxZ"$}sg&I[/̴bLΥ71ͳԁEcm|lnoczr~SW\tQ0@qd Pmjr.mRA&J "ݕB5>`"#ltrBHkoH3&҇;^XK15M.Eqۂ7uEŝqUXT p+H'ss7$¨f.u  }8*FEnzTUzD9zΫ#vp 1|V1QL}D5g1*Z |H+!M!UBQ|VB뎮N_fgFD!M[a䡵[*T&u#Y?HA9./z#R7_ۍ1 y SSSɨJX66Ldmk-9Uuy<`XdzsU*LL:pvPJJ1BG m|拜3J۶9 TYs&;T;}vݴ fOWIlz sèr`s'N]稝ͤ$7D$/cM?7)SHͧV%\N/]#W* 
)5u,Qjtrtą&TC+ndmNڶIڢ#l12Q  -$B;e^`W<9MT<'$l!rRvUu&EY1IQw},/z:Vb̏io۲KJt.[˫.ff䄦cG\s-߷[C5fR(RgN{E8"Mo&U 8&R8+^zc ]i9]OTZq.&@o?"u8%[JvvSO.g $)]ۚ{G@״KWtpC hg9_4 KoEW'Dጬ*nƽ&rP(Xx ^}#њte. Ro|`$):[!F@AX ŝ.PΫ"lGc[))IVNUa4-&Bvħ|eEؖ6^ u׬4,I3;Y4=:mb4:qys)pҜEi\.k(Y;IBk'Y}");q *܆~o=)ѽXT)"wT+B:7Fjʗ֎qxMD~-GC }")t$? ^Fy)$-K ?:=BS0Y^0c0q c0q9\ ! Q,+  AasF)NcY4P}݆jޒ/"H/u~O)FT9;^~o9nDk N(-9x0d&LDY[PCBd:(ΘTH z+"2`+,!-6TK[m|Y>8ʓj.*\ډh\.smC -*:n0<R}@!-IL4#"A24%*6VŸZEPNQ)-f+8< zj"\Àx`N,n콢lӜ@+&Yh0-x= RՑ!"W3@#rQ$Ԃzm h%֔@IFZ _ φf2*OΆ5/V6U{] QrRrk29k,[(C)͢Uquc# G㌋b"5N9jazPL^u}z`f؇ \.J9~N^Yz[i'0Άwa8/ O7ٻ6rc%ٗ *Hlm*xw)XHIHj^"ș!˶(4~ht7n%XvQ+ 2rU}g↝_ŎM/@bƄq4)㌫jEqx.ݷĢYSZi8>F8fw !|w&qDiߐϯJ \R8`am`E?!7WƓx/;*soƓz.;d|}$pm#,(/Qyx',.q,୮.pP!TU!A,> mЈ[d ZqU ^,$fr½ei㦇2%,i;>=#?@C\3orGiRXeD /`.?T'{2 Ќ1qSL^_]6kx:7Jsb-kԅWǎǎ1S* NNqYNNcxMN|Q"'Nq+LdKq<ٛJAbλ -zzcc]%;\ a!(<b_1*LIבTc YP0ZcXV[Tro֨j2;.h(zqSo8߮V41pU~*bP[ʅbȌ. /AwwF#Bm$[O|'J)Z3YA89'lFH|0?|֝ϻ~QJR*BQ)_vJH0/i$t'L A@n *ق7ٓлSK"_95 zvq63PD{ա,-qd\@7^4)#`}r,P,Ӟ3e8BB-nԖB225cz.wEAvזFǙ731s<b) 9_ef߂w6a$K V2qx)u2ԉ"Q ^߈ ãH2B\Y-Tcje*Gy%u"tLzy&`c3C!+kd O\eH89GuJtMiԔ㥜RPOH5$spxg)98<(̭>&W)X yi&-s)rXXP$IDbi#%{] O05MxBe]r0r1χ1Cֿvgmq'f;liT™` ӵ Cĥ5T4&"N)sF^ѐM-&!\ "iLsD5K=khEs&ne^n#QŰo=DbKYzF!poKYM LiMdΐM .LqMkC~rZM2;'3в R̸P[zJ4 rnssq3qtJRc4GX2pY69&ͮ:!EhWÉ!ٔ)!YaN8hL,qìE9ЃA mbB<.<c~?x2)Svh́ cY1{%zAs4\+B ! P:!0XAsTM |x*ATWp"#FJuU+ѵWk$Y,EQ_~$HOA=ё!}^'j`ۉʫف%0 ]K`eM{%شL*kJ)L.lBaF/TQp,a9X!EF^,MyN$ y#HNzfTn'e+PP)eiX:rզ=NG6Rql*^U͝Ts< XG9|acGN߸7rfro}Q95Ĭ::(.$r҃CXeprQtu iHsR#}"X:j5BgeDn7=Nñ/%ςC )05Q9p0y.4&.[O{X2h(FayC:6v<{X]&1P^* KU կK{Y)ިǔ8W~[ko<9GQP,YPvckP`I2y\籩ՏӰVhj]aRb4y{Mh%R7kTRΘjN>nIO%;v)ꐐ\D+ɔgi7KBU Etv;]x*J5,n[h%5F09 ۭ*)S} c$RiVIvCBs$S_کJ(.Uxmc4m@u. P-iZk Kh#΄.|?5T­ E b7 5΅LgWvA9b.iv=.;dG8E~ ^-FF{ʟ'H>WdszDv0@ &ӖLilKeJER];nJ(t<Jbs5Y/B4n<[">?MČgj_'4*e<7䜨6qM`|iҿZ"P''A"@$7P?ukcw}xXS ¸![9؛֎\Z$5cԔ4 Ҍ6tk -Rm8 .m)!l%Ļb6YER\3­)#bĚn~['rvMz1rz nZM'ꁶW 6nMr%ͺRʞs4E\y߅ixP6C錩@2 $^ zjrIr 4ByJRh55Uƙ@X%zS0*UB-{x ?>V'l|e(?@D$n4Ќ(|Yg37q ޮzx6? 
^awEWY kţW~QJ9wHB"Ӣttciq ȸu rfRߛ#+)tu4\X0d'>ڷQkF}mYkϞ ĺ 92!y AI!C@[<y91:#` \WWWxtSWY YUfk'waNlW`Kf"|;]Dž8 Fûu\f ;YS]Y߇_A5bmfjlߚXG3_q㏇1 V#KrbN~4w!cZɳ\乥ɜlKs0%S19U k1ȧ_Dr?a?cbZysJ/IU~^ *@PFW#p9gR} aT6^U7>|o&9!_R8.']kw yeiNmOW'w[T+jy JiWC9cr\P63n$\U?r\+hgE j7 x%G$H &-st(r1֜#'dS\)aS׏_!+Jw7L0ucDK0q!*gnmڽ'&GJ@;LyFLku9u$}3:᷷{;6sݽ߃s4p}i3s3uUe4˪Xz0u'$"%Rr~N*q#\pV+YEJ;QG, 2Pc \=SQK?6 uƐ`f-ʁL L[oτ˭e:ϩZVԌV]IF<*KrAeeh@e\|kiyDM wzÁk#X) fv:rZ3_Ԩı.E\pzRrcIpy..`+& =RLܪA?ss셜R,! ?\u6ьAO3:Hf:\c\:RWE`BHs/ 1CZPn!,@᭡yE7kT+\` ΂vbF1(3{[H:L8 Vz32̿]GlưD.:[ z o$[O|'J][sF+,>$"4ൕTj˗JS*Ȉy @V\HPHl=_eۅah(GQ!FpSZ#ܧ`tQJ&~.ȭyLR1;{6s<O7+[6s]p Sd/In'邓[M>'e3CJB1~[e$snK5zp}ܷ?:o=ASL'և/AI:77uoǸbZ K)<lz~: #IO@$FdA{SbMj| ЖزŸ@b{ yv{D"2x4J=N8(U1iA >-]9܊ t[^kK,]2S3VR[Q`T(uz'0%V Ǥ&e:| y19oD@;hS;zh= 6+鮈_Y~DVkKWdTʸεuZ#}@nɰ @Yѐ . \V6*a p3fB1cCS7,6~lk3փ8@.V਱)sa;͔;앿>8h;I6/`y;{ u0Bj{6kCV(vURnѴ6SpkYh"AG@ HkQ},8n# NE7.І,P׆n  !rEu#J 4kpN nOc߉ XqםT +Q=.e@v6@:4Bj@r>XrDE$رz3@Ta[Wfvܹ'0va=xXxJi"*q V ;7㘋wCcocT_ykl&Nݦ~{qjɷB}peϏمo=~o(|Vf>懗81'sl)y$, H[3k;)Ā4 LUHYMç}\p4e A4aDJ$=NQ)&yg^e:e½+kGt87^$/=s| ­t| f+집韎{}+\{4qMȘ̂O*t!$6xcciru.ޛ7Ә(ʛa/~~&5xn}pKw^ffW0mKxm{7Ǽ˿/*YM[( ΜsYG&[&/,ZRs%@ۙDpgAHU<sq gTΣ; FXi@2d3{t!4&ǏˊlS;gc L9OE*=ʐ8 E<5VGwE=Uy0\b>s\aģB uLqQO؂GwǙh<%9Ǵ;ÉwXH:cT_,PMZa1G/$Vt)sBxHtTʴ'R(І. P  Q7Y5#JOxpq{*!;+*JPxJ#NA)L o^?11ų&DD qc11+<Yc^?"HB "fke]l iVD0HȲC&n J-ڍ hV,6]:͒3Z7)`犑+gP$ŸlNSrʑeE&y- "(gCd x,oEUN\7q*j[\!IJb; +z X f4hUNOVKcoYEp>s8b H;bMr];ρ뫻pƊwS!tST뽧zT*LX=Ws>Λ2aEyAvy0Ooxa2Ɖ&ri׷PmɎYѺm|jϪJ(ktW ^JJ:`!0KBkV ,( X65/,}3M˚>v-B6K )Z5砧6M!Z#YhKEklG{1O*fw RTr{OHsIK=d]s1)54\ۛbb 魟S{RZq ýcGW+iǟ_. 
Bõ%f$H  0aק]*B@)t2Zn@5쯾 &I3)G]$ [z7,V9[eL>YԚՏi06Sg%jL=_0.jA;:(j>J[Ʉ0y Mg2N WA)Ɲr;2L7.8`%SQCT6c{2ֶw5Y,R*0g*NߟNT=N"Q(^EF1YN(EBZ S }Sjondx{W\6LFLxQz%( vAp611f8>mFV^>sq]#(0 !I)F)&*N9 u/ 'C)G'`lrn6,LVԀf06Sc( Ţ C0d`85g '& 13H72| 59!D ^a޳0xܫ%Nw5BŌ# N`;T]HƺwCaF6n`c h  T0[&ȁC;(v?ޤiۆ M{`,n+d}ǽ٭<*[׉&ވX‰o\I(5;óӭ ֛Wm2EAk+ڐ4?J#JhN ]pAU\0 pVDlŸ"EHoR0 @dNX4Z( 0O{-޵q+"D2x|9k#tM6f%7[li4ۈnţ .~uao}}]Rt,w#f` }X)xŧ_?xhPƂn- 8"X܌st3߉FW[ߓs8KV̝Ν..@9wyXQ%`籪F%b4@K?gQӐJQI#XDh O)Bd+q|"m3=AÑLf#./z. k|ύ+j5$n Q+gꄘB5+ꂚȑյT]IO,N{j) ȩl&d#1-.3 ~Q3)6H &6#*K RF]mA&+m@gXl{GF^ժg{92ҿ##G(z)%+WҟRcdPdۙs195nϫ,Ef.|R+Y*S#9 ?FzbNpY^ez ,z%oP i!bߜ.bͱ1!}>.,x<сf,]쟋57za<.c3\^ wdLUsNV^+Qvy;(vk}?o-uj6k]em AP'٬/۝1;ooy`Yr8K]6Z,j. MB;v" JI: u/;ELݫO 4듺Ʋr?/;: h05|Néu[v!0x{i#vNOn}z7u /a>KZA5[Pl:Ҝs /q}Yi@1*J3 <6jbBcp?*T|T@QP2RmQ'Ikvs+DΦCrZ[LtYx|qN0QGTQoCXmf g4TC\^@(l.z3HL#X`NՌATsm±he^OJzQA8U}BSuQ!h.騘PL`s$ړ bBP󍞳 N NE8c5D7vv-*$D;euⲺy@ZWsGD:3Vx{w*t+"5LeZZj[ i!:}K5V:u&[Jmd`qpsoMe6%k׃-wj.,=LR0zKs*WWowJr3t?~Bo-_Ų;yb[7ݦ>;yv`Š|=IWܞӿr y̫eۢ63Vږ$!\D+Au~uϺe[U BD;XӁ?u^LhꐐW.dJ[墍.t n<݁ۃ#Q&%k!`5P;Swxa(y2&x-4+dRHN JIF4w(/ZN0n/H2E A1%_?,K  $aVÄk@r1ƑVfL3q(.ʋϏ5& z 0 pxMZ8n+ L5",PO. 8|EA,94Bj!"P"l`$9hhwX?߱p3=exSͩB}s.VQ`4 PdU. 
NU $02kTb䁱kEL zߺI`ݪb":UQƺO -c[g>[EL)Jz3'5m0wY_:!]\.5gS5,T"5X~%U5\VW3.0-UUc?^B v#q8E*{E&DGoWN%>=f?{~p 0 |Ko&cf7~Arf !, ^W%B*K*ӂ&lMC~D ]2M2e iL`ٳk5` ["q1_>ʞ,ӻ<( 넿E,cQlhtQ -vԠ 6T7Fk@%NHLڰKAK~6,rv/lgݩз3 g*ϡ%# F9ԶI޵D%DSA)dg[3T (v?Lvu >!*630Woa:v> 5 ZcUǛXBWqGGE&9jFN?w7g~sw>O~Z6^]#I)PKL)֥dnrBːa&)`d9{;]@nIU#=dC8b26X cAOCSZ*X@Jč<92C5ǎRXn;n )MHa/u_[Tޑ R> ڧ FI̫CW#*9rvMEXv&JK^g6ɦ\{ C3G[f8.T2JQVAQwl: ?6֬Sim(Cf&R;lH ] !`jdi_i׈ 8FG+(=+f쎅}Pt6Xa0$H +/H!>bu!x>){fCL'{˄zo.#}qJO/ɮz~Bbq7D$\|o}.{Xx Ɇ͉_g$e^4/5UhRp1Cp>7F^J7 ԵWS7WsH@j(*TDpȘ8Ѷ/L$b^Rؼ6T1Ŝюe^E2 \ׯ}r//: ~q ~=_eGK΅f!ZWUxᾮzp>a+ gԃ%|qIe+rgvPf/q1$&jLy |zui4/0Ⱥ% lxݎp=^`wƬr/|6r'nyiAZow?M0t$j{21T6U,"iy&6C +̝"]P\6~^}0Ϗo6M\[Q[jP;SWlbYb36aH tXSnӎ418qg2dQ|{YQ0"-`rDSfYfXhUI`-v\ 1E)1kYT +dR5JmJ F L f6)GlsZK'rV8oV/7Wo㽃=v3VW>֓UVj{[eu=6R-g R]V$P2"R0#IۂObgYGR~:q© #^C' e 8Qel(Xb$B;D#m9&21QHaW` 1Z;35s, (W>a:іd ~$0C;I2fN2`[<Ŵ ^Ѿl4뮳 mRbeD:$,hDRi)fSZYc*K Ys<|rv9Y{Ϳ@ӃaKRNO~xzERo~O<}}"|9AD 38rͅ W-3I˧875P=L'm0! Ew?`؈S#v5A<d#NFApBTwQPb}vٷExn)1A:?7ׄa 0(4CMu6cf!#Y{w4ptD־[\2R؀tIlPJƕI*10gbaS)۳IAasg`粓f4Ͱk*7;% i Rcxw[jB}<+';%#DVٛ )$]f2 7cJIA*u8eB,H&|C4eԽ7UiS6*cZIT$NM̄츃PqƤY:Sa0<-'z8Gu3T>!0-Wixg}[^ٝ?9ޚ ! I.)=%N?#$Lbĥ4m# /pCu8B1ahyCQ&%k6`4i:'*3++Pı m> P֧@0Avh1w`5#ˊA;^F19]jńRx?4Th-D~+ )/5SipLG}&mg&)1fR\"&ЮP!(~"uC߽CiXL>Mot9ŠͿYDjUVؘpמ4o| vwSp&~+p4hd_:,~1` s7[tpܒc0_c|v˵Ay2p$yXuQC&;xP~™WʧlD$퓱|"U%i#/#QJ#`" bM"k<$+^1ݘ'y]z\"o@0='Ca/3,mHz[MxR\e"J œУV7M5]m<%(jϹ8D,…npA?}%E}.SK(wb /6Ĕ"qS߈[^|ƴPKၻfD򕜑J(0CmxbzmqJ_^sڤ| D/Z"Qx1fꪜҥ:VXi[Z ͡oI)d . j)ռypӀrG$cb9.WDhBs^QnSYdZ (߯]ZLA~I` cs$BII{ 1+Jq囀*tLQx+ ʦԨY\H[NPYy!=H j%/Q@C\"zLz*qj,;E^u5GqPpI[a3g)vF!b 5F[}ʙ@yzTdCo?NMH .yl!zRsi6B.o^j0E-?z)(iw@Rz-irYpc`;fO?J!CK _.~.M\.KuͅZaQ'X0[4Ws{R\AL;'wZWSAW3ю> Ғ2_zBeU'j٤9lecKʁ N<[O-cG(n!&Q\?;bUDDbW:{y%RCq] 9mիvzd'Bߺq}^B85(֋Ll(>}XSds`tHl݆:,!m@֒{JyӜ)&Eie{5ؒ+<,Z(('w'(BavPUoS&0QT=OF@hP'P``3wH% 3oUTAHSB1#+Ճ .3,7U{*3rcIpy..`+& )&bAnZH΃U<>7"2=:X4L\s s,#j`;eLs8prZ'Vk@j!." 
n&TwPST ,[3u]Io0B=^BAӹ(((t!20jDL!ݳGb>] .㙛bSRB=?Iw_R/#/m/ [>" 3N(}X-<[ڙB}o`+d}Lth [zI<_F)9ɼNZQwvuRsBoɅK~R E'OEwu/Niޕ_̚%驯ņ`@wqbxzvKw b4mH L!m7ܹ'[khghۆ˶v\ j1br(MR Fh}uX·AT q^„dխqqe^h~,G:CGs2.,G-FZ= +M,Mɛ??$_f;O?\; cNd#1ҁ#̫^)n"20&+۪wG1O`|f2h 4Hhfp>|Y v[I6k$MHhD_Rs:xg WXk(79`~575Z#\^X&pJXX͑;0 or6J"Ϲ,'b >֭A9W<#ֻ.ߨG8ZeZ >Rek LۼB3DJ\/l4,0y FqecAqN] 0Y!CX h}`0Rcme"] ]19$&bLJqd(D W2_%J|$c,1u}jrk^=8s24YnaJsLMaM;>t<"LPt[ܳdFL)U|Qow7/1; uv2䉪/)+BiE:z̚BR5\;;}$1=ZgҀ;4&g}ȩw?1[4Zq,ą_3ː/8W?ƹ1ՏŹXgB* K.P*rt7\!_q<VT|vN8G};:1"󰼟Gr:W&Oj -܄~?ms&t!.'n6Әk<'hVr'CoP$f]%[F;֗1<5+}klCy" m)VyK@;׭) GbHU;HrFrĂp!wqBcP,pΩ2& 00C8T=m$a$(gbぷ2%?ceOnhMJW:eUrӣcL4zn|Ft";*NypӀrG)`ҨKs^QnaȒ`%!G;| XLY5ΰRr'1ۣݔՄHiJ\ƶf#\~.[AG(K=?S.i&V{[-.%z?v+Gݷyv0bӍaTgsE|-le-ظ,DhBXae1dV)Nn7@,1zcx\z@bQ jr lM<՝$gBDw"yF !x+0Dw%F;P ؀ˆ'Y{C30F[:7*h0{!f>MVUxK}MGw5p?]1RgrZI|34m_,geR4>+{c@ i5CX;&H^֭͢U^?dU!CHF3lmʶLs-Y !`T$$+1dqgې׫^ 1ݭ7OI_9d@ nl'nNߪ 7i@G4+N3;2-Oms8~_[s>`X#o{L׿5OOlBǫ7%/{'jCl.=,KtMO3OG{DC,hB5P%o͐P.#p *)+w8DF `Sxp0)标-Zata~~opn]dɐP|CUr x'xjE5?!^dEft^I]DL]s ,UBў˾:thA]bDA| u'(+q_W5 BaN݆Q2a&8H..@m$aĵ)c9"`%$0 PRr'L0Ea8A˂km$r)ݻ,?=PrӤls?us \B <)]v=߶9ʚU}p'B>Wq'q'ZrR% ;FqpR"sM˽O9WziU,x:`Wn,^xLo$HȧJhԘLYͮl{r?AFtduVǿtSùx9>y62ߏJf'x4d< deBLJ8o,u%)8cA`Ӣe`R8hu ֍^$޺O '\.)8B wMDURLfiw;)$|UJxdnwo*`e܀;/[)[]w9 1=;59-w1rr'qLv%ۧ{`=*cs'~qnwpq^C_ܠ9;ᤀH>CDRRLTצ͆1XJL"=4MtMǽ@4 Onp@ H]ZJ}omlHO'#H&6H׍5Q0tsv" ոK'?a݆~J#ԹŤwjtMlhS.I"aro1cR83亱̐OU "?,hJ؅ou}lnfY>_[wj_mkBwrP$\w]"rXYSK\?rD=@0Fl1Iwr%SpQvcD31v@L7dA"dBѶ/]JݫOEvw ȤDzzxaBh%8qtR'lFŤs&' `1* {eudR; gۄn؅yw)Rѷ )SRJ]d (DYB Daԍ>6XHK b0E!vcxL:H '0%"ӂ&.J&K z=L ELIAഃma\h0uY-Oo|'n&j/(LͽYgo< =x l^y>Ĩտ~5ev2؇| ̪8fn1v|d\~ue_N\N =138VKk'N;;-~B0#gT((ϸׅn}۟NܽK~x?}6c}[O[.6s픐e9Tb!X6Dj%,Q:CK̈T ]0CczG l=`JM0,#Ӊ}֫9U.s0 &je\3h"]QȌ`%l)[K2cx+ ˪+:ko#YSYei(8pb+íYs!Y5rO[VKo| ->nќx$3fxw4q ato]aU x?kTZ,5rL@e!aJRs\9#ֲyjOne¨R֗2-~mMĔ"tD\%H0Ԗ6 #Kc',0-[JQ8,9&)4j-+aY3rU-~F%y&߇?ӧIU1 ɕ Q>n̵ޕ7I-\ðޑ Mzp+~unJF^?{II7ƿtp^IB  "`b}AH(|ܮ%IU[1Aڑ K+:תJ% YiN2p(>GXw=7@D4R"}p *: Q\5CiȀi0tcVsgo か(R4pr * e gQ͖CCV4>_2{,y1Nt"%gC Q KU 
̆BfQ;^Drp(]jRvb$P%kǙζ<(ɭTdRJ'97 ! B 3%:qP"jzZY#rŴꐓAQa?ǕKp`O"P 6Bq ]IZLO<}SeE1g7C&xf0x<{۞vD%4YF-\g7LVjVn{\=kpyՇƫ5|/mmfZJ"33` YQk0/iULU-d䲔-yGŸ 9uA| Sz @mu@_oQ]ė>#?[~%i~dK*p/J(B'2IHJ\[v \Om,4CL% lhBh4[ooVFP4hX 9O,9~܄ܑ}w/7kPO8Io7E s*HJ̩ބ$KtfjS6Q]34Ju M=^޾=Z2aϑ(8x -*_|to }9#}d.u-yA1Us Ju΄)f(Hf>dNl0V v&cTu|X)CFg^KM ;ˮq9;cظ\*r)aNj$7QW ӊ$,+N88h 9r L(C[y X.BuQ(deN"`:\3C3+B2eH9-S-Mg%(ɔ*& gВhlH9eEJ@ox8Lf/VkAb[ 8,#9DFSr U0 U$a y5d,r`Zb31ׇݞvoz܀Y~c=P}vm-T+flF AAhL]`J%%+ rpXA( x5Pe qӽ5ܻ[9V&B;! &`9GEN5VQ Ylsi{ݒjJ.l0Ga V nX -o C$UHC SMRV*3B J3o%8[ѨfTPX 0Y`E`M•{utyB9`T~0ĊiD!lwIAZWKL8kы]bw8Ya)#3t k  4`q:e0Y5SQpF!WgtR~{[zZ[_N[*4W@&<ڸfe6>knu wOOUÃ>~ç/xAgȇd$0rwh"D Y\ӻ)zSa}ƟgP߶U>=3`B)d:6̓I81_*%YxH-e:YnoՏ'Z0^)}nP?_N*P@ߒYՁ벓~e3Yɍ{FsENQض x&&CuxBq! (e$ޭ7˛~vwkвNvQ}F ھ܍ez)uÏke2>/ṀAyLj!rEd[[7]%zU[O!b5 #4"xw֓qJ(Cae!ߐ&DɣywApJ[$_DR잞@hO/[Z3{^Wmǭ!bXVx`6WB5Ѹ= ǵI19[}hR%f}P:%8RQ{_GEߦ_*ycͳ >քw_'hܙAvߒ"C}3Gz]dRVnT#ëۋ8^7$Mݿ'K8VTRSB)!mS#5ZN D8˱VԠBr+,ڋ-sȔk,(/0 WaWLc}V#I4T)jms g̚*)bHfqjȅI#J"02fLO1EU=Y[R, wOz>&\\ 9{215*)Lv9բ (4\O7mzCAODwӓ@`RrI賎!P\J n0qh Pe3ASAvg>L)1|NɧHkR(ӨS&`@ 4z49sCԝ =pCcР R( eVNp:X% ߉T^RͲdm%ZwdHxIRifI VRH/-A;߹u̼䱾/$f,c H8@H SեSv+5Hq5;Tl9\U'9+}:v5:{83(̂#Iq I9-´Y1[Ku-HSfK3(\/TU)WΜ q,۫H3εfײj&cJa__rU2vnu<.a4[, XxOlH݁|en+h}Jiޘ%|]b&ݼ4w]$sǟӃ_BR.$Zb$^r?}Bod@/>< ~)V T]lMu o.bFC@`~+ R=P8;$[蒧4K|L&ē&v2N?|~}aA1Ҕ$E4puSev-u^n%l)BrzWOK&mZI͹|lSGC*Gl06r`PGF??*'?VrI6ek>,˧]W|pG]ϖyW 檃S{j]9?=fn Gzj$ڢ߮Ru2 [{EyܱT`d ˵؎u%gHc&Յ6 gP%ҠB Z zr)>zY<^~n/&b\>/wo`SNKp眼,p0eKM# +It:'+H S= '/֓efJDjՏh>2=θ R1MsfӌB `9oiº@>*;Hy 늒Qgqh{J{t%蔡KgH&C%@KKP/yrMg!Ir;rAF"j#g؛!R`EK,I*xdr^v?UT< \NyQ)L%̊Xsb"Uy!2հ.! k"\l9&qf%:4ȊS  gfJ#JT SYf :E&ʋO"pu `zaBQja 'a6*(V{%˜NM:f,/j"CΙRX ! 
/$I`xB, >9Bp8-mn&?¥L!S/p0-oi{p}>bzmOcwl<PdA78ށC{Wa1 jQQ!8:{zOlq;w/>=X842K&k/r2[,˳zS?/JI0am8Ʋ7 |8U>Mn|Nu &0;֝๜oa52}mo_'eگVB@ )ZI;PFtB_`CK͔ڍl(Ҏ樆F45Ĺb3P WJ.5@vNo0ZF7jdIjPD:ȑ+I 2Wur.D|̘[ҟcذi$2:T@k8R( %L,I ƕ~,'jAGjPդ 1q.VuާN:/$Q$42a\ Wi#s>-KaUSoVlbT*]8qex 难M|sYcLnIhAuk]TGXΤB UPY ~U(TQXpd5wUp8>_VB{Gǹ|ZD=kHǼ#`Fo7h[-io7>#@qsT3ni+&I i糾zašx]*X{jTOe;jiZpX\w TH hT*=G%bOkEɐdX!aPg2F;]GG<pV}.sk SYQH1 fE淋[WWYz ͗ kT[]$a˙c*V,p% J.p!%˔ ` MӖ<^S!R3NIԉkx/Mr6M2ukz!l"\(BĕD6S'?ʥMcZYȼeVi.%%`4ͭ  GnɁ^j)01>2؁eܜ\ka@($J^jQA{ӻ<pW-P_tsweVR.ɡ%E׷%k?}SlץkU8Bh|b]ms-3 vy?wޥ"ǵ5o̝=BC\*W6>DCܠշطnU[(>C'Mp"Z#u 8n]>D#''3 FyD>GA}>Y{|?VČߜ.S?f)1#aXh')+lj Α-xGuo~w:I?%069W%߹IYiАhU)b_d%ٓxmW fI0Gb1ts^: F\yzՁVkm_ę\Is J~:i. f]8$$L6B%9#$7B#PfAjGUu w/_ב!R8#ډ *IX¯N_,5NqpC2LODRA165 x4sUB+vC$M3FĐkJwQG.οV5D9;']z1"Pu}DM HXՑo&pĵBܩqB*#*E+&Ա+9%z4? sՓ*@ƿsrw>&Xn3wͪJE {be͕VvvlÜ[K5T%LzϊBp~{aȉB-e)*>͐`IJI,ᜠ.IUɁíoR@ t,jK{駁:" ,!'>G!2to󈜆&Xh奄L#sn1%; }"xCHO}#{cTQ!%BJ)"M)~ V_ѨOrWxs4Xa 3#]-f`~A}SR2g7Fkk|qn{c](޵6r#"e7mOZyJ&$8Ey{ǒMCJeI[,-θE.Fo A&Z`?ws>5 &IӖ>!*-C{u"o'DblP' Om8GfZܨ<]r?[m/c݇X}8ޕo/V޿6oV]mIoV}>y (<47ϗ&BgfCTȡ9L{,z?!giB9mrZ ; ޡUHx;Td9Y*7G=F]'}^B1AwUfOE1Hd/m)S͸%uX~^[px(nÑU5?f1ڔ3Fף Z/7kpU40%cR)=eKJYV.5Gsx?ܨKNw L`_)6wqdnEvbͦDrIEd){$ )1(<(m6@"+:*6O)/V o nͥkQ+'c%%Hʹ *d%hp{Rz/1!ˀ[ȁ*n\8|zZ6e$)W}U421;IVz}HRpڗw`*#8ZcVAK% SHRm|:c Gg›wEx/?˯2[f%/|場Δ⌠% z0ir h#a~|B[Q FEnMʨ k˨ノeTڐe*CtS@dV5HwVD3:SY9/oo nGX8\üV㰔ʣVZ)Y-CLd_+[(p6Xs@ `_ Ŏ>RQ9t7zȔ\G沕OEbly඼Kq((c1*(κB ^R$PLK ֬H)Lu.]a&qRET)llƜ_DzU[T#6(y?jr:f O$S:Nq^ PDˎ8[-1%||O f Q<)Q*C cd75 1"SoB %Xǧ3%+Gm!#'" ZARvI0E^F+UM|tyTʓV׻G+,qkZ?*[4nӸN:o+'HŨ \buSO@+dpQr QL~y 9 Ǜ5InGW[t\wq+gAew~4kꅝh.ZpVK ~ n[*6ťB8$KsFphDBQ)s𙣳'B){=t l^a4>]~nMPKȺ3zY&/cg'wvYÏ캽<8Du{I|}/41Ξ]8./5y {_\Lk$LΘ8^(ծe5,4T/z*R2my]6 N$Nh.I[Ob}SjzYDIԙDe*j^\$ǵH870 8t*Ϙxh+*sy>b 8+䳟E$ }Mbr\Dy9TȠŌ+⹱О@b^Iy=}Z 1y;hk+.\GVK\N+2./HMkG2 q(YhGĜUᴈpp)cY%ٲ9[D | %&h˄7 :kJ&2M,1/ 8r*q*FN'_ Q,ڈ]R2)SE,Ebu`Vq@%hQ 9m5rˤ2C0n[@'/9RDҊk[RZZU TRSjAq(Zy1rexUc+~Eܕi?pi?pkM5#(oj H G"lk8ni3CrDsBX>'ACҪzV7wL  ǪVH$1aZ .BSvcY'4۶)Ьxc\r}Lj-ۡƇ֜ɮC2(m߮bUZǻbԶ UMܽiXpxBO8\~;ۇRaF 
A*ZO{'rz^Jո(ܯRUP̎lkd-п)&Jec2yCT(i=¹דeg*2gf("BA˙ܶrfGyӧ:K?4wTSR w /O[)_w7pZLǜ;z0W9|mkd]o 3(SoVFJ=j>&РE/4Z UnGߨ-R]}P+BBQKJ],Jv`CLyHc}?s,ySO^p)(AUp'Ta,QEw4Z':8c")veq;%T6+JZ,TGA﹒4~OpaF4eK!fGKӅyOLcMͯ92K#%M $)3Գ̒S1$H\5A{&pdIΚYFoHʎ!-LdՌ֎o Ǝ!"iPt)>N(}?,c(kNz$]$!{"#m(XP` ĉv~\a橽:rǂh]9ǁfh+UfK@3*錊oFľj=io6: Fs=%m;f|$ oƜ`ب踓X'1qB,WՌ F74^iMhmu)cA|5ѯ {}x-τ@J 7ɐmm5ԩ6+-2{Iܵ*3nUB l)9`dDs2 aTjO&:/ TH|Y/|ZMm}'gru.;Fh t<C,%wY(=]/ Zv\ xMΔ6Viவֱ݊J8`XiBH%cȼ(FF6Ɔ<טJi.kqD@Q_WILx+ɾSFoy;OE:]zi39jťп~t[9}B˝߈1} @1$iI+*T_e1sƥ9.4y}>%դ'xѸI )i/(F:qH= p\E }7%}A|ԫ(BͥE8( ?oQ"^R2م+9+#Jj?vl(1p=Uo?ED=w-D_%!_)X_FJjU"OptZV]\>E8.w &&v9鯅񳹯w?)n1fn1fZY`?":\$R,ڲ-Ԩ%]*!8 ɻ `GV2}$~Mbݰ<&~[}N^ɵ`I2f NI$kTMlxI~ i i=.ɕ+-'rNsV%U,Ljr+O8,TI܎$9g!%Y^ #[D&Js@ƤavavZKs8VJ x3`x*+ΈB (nv٪(t8di(Ks;?XRqDZrZRG}W+y^|Ǘ*)qل|$"8䫘F"W] {b3F +'D7:)'0-Cc(k(Vf24 pb % P)C%M Ja#hpUS2( #sE941H*FKj8vs(b"x)UЪ5Di}@ECp%dEXaJ e:9 HZ'8Y 2+qDDjrG)}9l/$wmJ6;2ߏ^" v.105%o%K-ew80Xb*bh#aL0pi N1DL4.'iD\! BY}(L SЂ(9~` ?|f!7%[#.=s96p"$x&E]0:Q#NcCt)?Oc^i}yc c1 c ΝC*X/@i:jq4Ir(-'Ob>8)a#t]CϿM&S6YIa'!M|K0k7b4%Qo{;_J/7_tbH $M7(neM:u̧}}0|*a1<{j[B_妯> uw' x(͗|ߋlp,!*)>32?3d1OWm] h y/F0]J8T1J!ƆJB=>XBކC-Q. jJe:JWڶƱR)D7 Tdh` {~ X^j%F׶5%3JRy(LJzCѺnj4)X $VJ55Ler!-AA>VɌJ;[O<cOy$˵Z\7NtꙴyHB{3j:%Gߛ(;NB eU 9g=3'BeVa[o3}i1ZG\%݈;La.}GDl- #=E"6DJn8&EJܛ%UwN"M.MS *fvq).Êi3cƖK Juꛙ^j꫄UV7h<%xc\ S&,QYT&H>*5k|EI'(!x7ob|h~WE.h+үQvL%=-TCN|%"o7cpldx_%z~b;${V:MZmƂS"TKv9%BeW O${RTL{jE2@TLOן;IH 9w3u(?#raci)!e;y-qyDh6ą({.VQKD"% S]Y/'Jyej\kϐN^+ *"A %>#"0hK&YbbҏHb3x*Q6Y~s4-`,2]--jz_O|&M #VmQ+F}3,RQJTER?4w_kAM #C|l v 5<˄=VA|YN%%[ ^*&WF˲hd򲿾 OґR=ߓp jpRj4)yaȎyNq׃`ߢ{idᔴ3ZaO wp''g.\y|ṫ컹U{^Bm;[r;ȍw~n{_![1]ZOޯw_=- e f gٜ^mmJ][W@[D2ͺVq^͕>xJkT(Ǻk0UGWËRҲjHhRvz<}_W;%?THtd K ]R[ZquVq7.߯ݢW GV!hS!"ɸM[i7܅ M?p].߿wwUA;*ZD9S(:>Z7R çPL}KTpkF:ĝpЊᩜig24-FAhOݬG^4mkb5A9js!i9[MQ2N=3~g_N7 .9N3Z(Ge. EfFZ)#ZXEr"mSlsJM@i/QK`a14q0&ҩ#R!! 
^hKp"&_G읏k.xȕGuOrTR̞w\;o]t݅;Yݦ Uh/Q3 } Bсt3Q 8dF) B*3H&) eXϜi$ܮ/8#RItr46El VVE 먀1PǗy= ע_ Fh*!p~NÑpQZ2Xga h#Q!ANb\Nl!&fOUNԇf00yRaUz/ +0 a$ !ea#Z ()&8voS.(ۊP*󋥃aP07{xLi-yEp9˫x N_}OY~//FK3 D8btg@b)9gO'H !%r>=Hpߛ԰4Wя(Շ!H{.Ut oY1n7I)Hy4SJv>[dc0ǜO-%w Cz*< G$v*u ti0|IܘB.5I>/Xu(+]QR4Ht3ȗI=W z*m" j)tPZFɑ N5iлPDX_t32"'|od14zx(B$0Kʐ՘ Cdh"$RU`%&:pΠD8XVѠ4, W9?:f;F#6 }oP%*}yF '[?=^?gJg%MMd@ܼR׊VQܭ3mq^g,1ی व +ƬWah m$g_+4,T!ݪ!n^ooOz*.k"j+F='c1= gJܸ"̗ŦgxC`;Aޮ ;ɗu0`6IOGQ$VwկbXǙ)]%EtE)LA(]+woƱ~rmFW]ѽ[]=fmE]Ԋ,fDII(>iQ;Nf!zq6{qs1D9(T 2nȥ', s `WFߖ|b;>|La/I4a̵ k;|Z* B!DAUnח!꾴g]GPI;o^i!蒲wvz{]/֫}]>\zMz//n>\Vz>:'ajB}BF~܄ʪSqu~ǽN@_ Oҧ4^{ /җG ) `!\\n"9ERd]c a\O4\jj9DlDa{rC>ACT\?1l\_ p"G@r>~}xHCjt\IY$٤O =jjtn:f7]l~nFx7h]cOGgXfc󳚬ݕ nOYys&DQiVBJ$؞ɋ;I*.rGC>(Ax|iBkm@#AͺɁ5 TSp.@J1$o^w)% `L1 /|&RXZ3)X : 衚!l(G5 *Pd~bf( 6D! _tpز#% $݅٠]Ytq'3Plф>s&pr⸣n^L ݦA RL0$ø#(03ULloԩ#Up8-j1:eVEpi f:^[NR3oϾJ`-ab/1rsZlGc4R!{--vUlt!EF Y 5QP)?a@"RIQ\!!)V Un C_.]Tc,1+:`nA6-" Gs@4%ę3%3F%+sRnOH~مB7yt~8J*A3ozojO4ž9$JRc;$MJET=tH Q`8Q47SPGZBiR!s,T%h;+!! mG%Nxҍͱ>AݜR'/ {_&%ŝSA\;*~oW?O\~A R lF._gNrx954Be/ܠ"ˏ`l@܁"Ճkk艜O5Ng1d1~mAʇX'LO\3D "Cc:&DwZ"XU=]Ix#;M(O~{_H?8pm=[]tǶvZ̵1[PEcGNM]j.55tr~tb9{r /47W/F+w!nLwX9ouu^.ny3_7W{wVߗjMj;LhG좤:]UʂݖlS؄D^RoYe-^r0Bwm[FVDPH<(6UC1V)Bp[BPz^ Q u:P2cYW D +iOBU;far# 䅅($!3!DDBOV[=PKݢư$`òjk҂j\Sz{-З,r kR翭b"V礶Mj:.Suw[ԺL ߅KeS-*%w_/匼Ie4C Ѫ:~e-??ga3^OtQa?R|@A36{^]Oj0[T'j5 Vwwk:.M6l^V)}3LPfHwlFf(]UD fF_|&).Rk7^Qb2tR>L+mkjSڭǔc#+A\9˞P*FB p $]c&)myxBYNx'K-);rPbMYc5qS} :'oBSJBpS(reA-PZ(4aĽZY EUlmaSДqu=?V ɽ1zdoQ"%<͗;ޢJW_\yW?:0w2j_^k\mv]fx^oaV?E7\}=lݻt 7ˏ=J"aըDMMxR Mk&*! en&#LZ3r . ,˕Cp2s R4Aˋ!Cb^g9D1fIx?|!T'JD [0^?<1`]&|6'N#ÝK2oj'o\h*8BbJpw5-9| JA;mP #^-4X+pG0&T_ǴB_GULl<ϡS7M gTCU\08#"5:VG3)R!c["E@x.Q$ʃv!#1*4. 2"ɤ2$RYmʾl*vZIgKƕ 8)2@sf'*PQp (2u/"wA Z9AVOdVssӃĜX\pDsU0]n cn[Lrc{\p@J*&QF]h3Z+, 4L\h%PX`Qh1(U2q$4 F&蹫l5.ƺ9. Jh$Zb)9(Krs%\ Yt#d 4XKGtٱEI;tW{rkt順A~n|H7_ tIiul $>NS AX xrk}$TsZ5G-QHrVj$k&1rnnkcJ:<ۙZpoM0a2"@ZFP%t&ٜ5Xrs 1-2DL(+-ߕ8^>uesDZXb2oA5mx\&׏i1ms?[U)@/d,˕-|du֝W[w^mUer!v;>;ٱ|bL˴MD1O͚GcC !'[knPͣRwCB. 
kPR=(?p%KzɃ.>%?/y9CvAUGPm\h**\P(4{XX"= A4~83П(ê((% ("@Hή7;ݏksm,RK`L""`ٗ*Tlveh!H?l.y }Ny–<&bH&ϴ/a3p+Vp6H`\[t]/Uk:A_%l5HHilq2G u[Kd!P5\,za_{TG<}Q inC"CPǒ6d fE/o$󈀦y/1rFiV rȽrsyA|ҙw\L&ETE' dIB~R9 2P2QA^*5D),UR%CĚħyN"\~\U9 ' [ NTy'i++Ċ3u{#0{C. )f&mPz,qo=0._r2Աj%BT B>#omɔTZ!ݷ6} [c cMyRTQWiqy I݃Mpqpjxf?٫lx*v ޹Fۘ˷?w\-CYӫOԀ("rP$M,5XzbXDli!1J4- 'DD"θA-fFI|'~89! zF|*zXkbT)o)S䄄v*YOR'+Jjz1]]o7+^Ru/I~$ 2O&OdbYJ|b[ruwu7*Ljew5<$//-elƝO f:n-iىƉןnkӼq ?6H$65o$@ֻ'ŷ1c[~߿ҫLJYu+.,{ GG3U??^{W[i4\V7…pDN\}X<+neǿnBPZ;|SQ-v;PdplqQ4'&*Dq ,2(?!xdKyA2cM Jh *X9/>|!`& #hchu.}TRcc1t'良Ūeyb E:ĩT܇~\ӿC X(~Me9B?ط>J3cJ!P1ArPXl}ģ7XO^20#4+_?ˋpZ?}x{?!oֿ߹5n#SsOwwUu|$_}ퟩR*_^!+ U@$[! 4mU xYIZ8I TMq /h\XCzSc247q-IlZ 5< XD?"]Ui=R5߻!+Ev XvFlO[`k&8`('/RǺ_݄^Md"wv䔦At 6:][q;bJ_ @Miĸʃb6N[$2G%P2_㴢rN:q\㴭⥰#wUA7ժou6RoxյnĞR`65Z XQZ#2›[߉nQk%e k{UX\ѻ^`ph6$ OI8~i\Hhp k_aISQ)6c,'NbnrR30'El>@آJ+*{R],>g͡^ uoˋ Ch_]|l*7H=13ʕ9ʲ)@^K%Kj+ BS:BhT*ҷWDGqŹɕ *ȥxK (UȩZqЇuc$&L^í%zVǯQI&-bȍDZ3K06 DGTv{Uknz,&v;Q5ԱRR±,ExI F]N ^ɠLyk jb)ӎK`ጆ)=\"1c]z;(ٱCn޺ P&?ZմܚSqZ 034ӲeOr$BIg'ˑr6As2+wdg$<=8*?H FUo@F/<=2P'a'SS1$RS ;gA(2`j М)>[ Z@"R2β`)YZ!k&̢p-E hJǩV4 wչz-HuI檐~9f xWf! jU.n/hֵh g<չz-ih[תqPv+$kLI'L o!B|y[%˥B,q8Ym8 yfJDN]!g+e@X,:GlcͅQJ5ꋔ۳.*RyuF"wPDQ.tJ8"XaBsJC-LFC,`b) Tz U*\Rr0PpdzDmf0Ӹ@! 
ĕvYЁVZe2Y` 4( ڇ޿1)~ߏ-7Y8|"hTkBPB8iK/sޗߔ>-¸ɜr6 DF<߯W<P2өB*`"-,.L}Xiil 0G"Sg7$ eg^NP^ d"=B`~O3#/IhѣtאG-i6"dFpOrIEjXwtuh>=)˪6y,aAu2*jrZZΥGܪʸ02nIq) ȴfؒŦ0WL|3yzaU]0P>% y>eţYQcEmImLP.4\^rP/K=-zo//9?WT9CPs+&`2#7Y"TjuIM+sfyb- =K9l^ʨ"OT^' ZN22~_bW|]uʱPbҌॡX"w6HGg+kE۾qo*)`q)ف YT l~V FCAݝJ4ڤh׮6#y9um'T^ N&U@C\xm\kD0|JL^|eA܄-P*a聹v'1 XɬFP\˴L̀i`yf!]@*58w0 g%g6jl 2a9t  y+'Ҽ% ׃u@s~;r@'j]~t:?cAtiX=/Sc_?wa="z_|mxF&c^# XZ~h&g2?/ ^T9T{?DYM@5}>QZwzf @R`*^CZrp*4>GTqpз@1`:]׳r"s-x~ @Q6boN#Y2s)bbIs'AR}Ǒ<3%RV7Wew1F:Y\Hb^čA$ g%1/5M{0"nK拶b/I+*|45e*V25j+NBTګ5D,&|ٖ^ׅ-.r }IrD}$>F,hřgeT)DHNEo}y^ D;D5)A_)h]`Ch@B$°:ǀ(].DLNsFD91Bǹ {br0R;F g(gJH[ T܇id{[Q[v?4,}ԑ@RD,LA^3,S*y1Zr՞^3dn&A~ޤn9Dx{G$2M "{acL)P6h#F4Ia{o&ѱ%'+]Pž{zכH4X;*}C78ED~x-ysˢ׋P|̱\DaSSjsW`[ޭ'7WawЫo;\x:^~OVV#% kIG8!WF`lT1w"x'JJ|L#`MC8R6dCKh4rbP^: s7KEY`Cckc@b iѡiaA(@3 D6p5(J7ghDw{spRX=y)^^9hXs`td0is(ȀJͨ4RvZjKVz;d\2&]FXI N,.X(x T$v'#ɔ26^bzV-DBR-KY!y EXi)X(H☑]YbFsz)ٔH$-ĉǔj1 LʻdXOa('r޳X$3M LA"p ?;Hz9s+ۡnU8#>B`)FpXWg$`(;5,LVQ4Btc,Zc(LYFe,[Xy> 3)IpJPܿ%mS\EA;ƒ^KOL$#$%H(MPS8q4 (-Ɓ Aa{xm5@ĩz&ڏkZmZ+M9ºh1 ׈H"!ƤxDL*LDk:iD~"DύӤ0{%EG`8aV}'$*|w"L#dB[xWE"lx>"PZ\?K'L0nG2[d?͕!J@>ǔw-W507W讅# ^Lmx8{u}" <Vk<0:%r1(C KvTb/11wo(j$C>q u$b%D\VJf%b3C@Mj DHJOXc,q_~"@br:ts]ycWr .7s;ܚ#Ê7| 8pvdiqK!['3Y0!I-ղU]q%c Og.N4DKV4D&УVeKF[ڵyꜗVvGeB|h D#v(9vs07OiI?A!l0nÞG5lf05FM} M֟F|3YÓE0n=8Dz,ܢG)p|tg6LgP  _.3֯ AP1u݁)yF4vpW?|&㩹Js[kT=!;{g}O1i3 e.bexoCü{7D]ԃ,|w.VR7#T~v}fNW7v/j!BxsDΚuIKNGxџ+).\ΐ N˲ '3gl1R };Yxܘ ιCQꨢxb{Db f`x("ʭlR^j&T'j/iO ,Y +iO:]%XDMԁ(.?~,$JAKZodY:u_߼շ{z7_Ob=`ae&*Dv5Y im}܌&riz[l}R^\,xz;k#Ɣ*%=2UsJsPfF|Q̓h;Q߶(䬂Ol*FMKq汵a;uI3rd)#=^N r~0NˀQKY!u:hU/xȒ bv;ٔVSzu Dx ?kB;D{Ⱦ/7XIq9ow:D:V-0cO <o гu,ךk8 #Ώ]QiU%hwG 1wKtGx/ )V?m!-Qt ƥ-څ%2FҮ~ ]`?\+K0>ymd(#1e_죠 ؆X 2/I ʶދqnFj]Yk_~]>y=##¹XO"%B0ʹq BҀ?0<(Ք{+=B:"{n-UAmGIAUGkKhkk^3$lb.P]콺{Wj+RL>K z˲qԣ::"t6kYIGV}fu݃tlU\CiF3dԑ!^!bpi|su,'c{}ubuN w}7<ೖ?9Z< Toru8KGnB5EKKAZm4,"O j< C ੓mo}C="8%A2jdtq&؈y! 
kk0A-:jSe &iaܐG1BJCMR,X;CӨGmZ-M2E:m]U&@7zZ+NCxpnVhRG>$c vn[>)1ºh@X4^Y " n GcLPco[smv.Uy˗tЈ~ӡ0׌,0sylm HP,A:XɈm6ό@)H{@pT?|2nj<p&sC2𷇆6TG] Ɔ'ģhxC0qL#Dd) HWAh4rPTsYhODP%΋13ogtupP|NMq#ƔD`PJPj0PRu`,&{A?|ܗ[5Ō62эT [Rыs"y%6$#5SSnCȋó,́Ƙq:`(=8ҩlAYFF=wx"0k#R#jXneF3_AFy ; bg|$Hʈ7 cC4% 1 A9.}2*>p,! N;Ls:hFU*Nf @ǔLXܭuRa"D3$ܥE#> nO>,rp?M,V;qWT~1?R`Ol9}^>,Dz0DƏa5*"{u}ш`9J?+M U /{_<]Lip) o,m0dJH!v*z]q^7ֲzkY7Ki-wat;A@sX/]+ۿd ~7 M CpFL qx#E"-pwHoí+-oWePW5Xkֻg_@ 9ň/L5X(ٓÁLz6VQ3#\]\<>g_˷VAMͻkaBhw g^~x;XAD+gr,JeMm1P&4s)=si{|?VkJ[w4qb&/'OV.m{K gAd W{'jo4E@dB\UL ?٦-ZzO k:/ ?{g`5m~jpTjQOAɮIU]ՠ#FG&vTg!CJ"yY,<'Z.]l']#Οd Ϙl,]8{BJ3u:tYjZNϷASL=CtHw(($=AF=ДKFS * F46"{ܶ K/9{4ಜS[gwZ7/qpLKHʉ7~z^$(0mQ ugkqIlcUcnǻfC_ 3T@2z<$SBTRpcuNY؄7iB܃u@gQ A/@)I2zl@ltB0\CxRq"ѣ LlO!?~6kR+vC[ RLBDƠL#.Vi(1Oal0R!P†x 9hL;~: BBX*|(M%>s}ovX TNZA|\|kc͆y,q 䱩ءm,stA3 ɭB飙A:<_f*=J|RUlm>Zƍv o0'QH3{ WGpL9/ܯ]4ȧjs%Xp2dNn'ɡqgTɧDN ,&3O{&<+w r*`ZRkqI؎}*($gQFZ7tO=gy@V],)ۀ[y]|hO0x jH~..ܘb4U$kQq A%Yp) \źJhտ' 1㍢xFbMfi>1˾C.&/XkvxX\|AFw?u;C_e+O7~R(p1mMWT! d}1$湓 a^OMdWgO[[^8G"+һ?AFpsq9]M'Wvګj %a +GK>w8T^^rn[?;X>^ 4'6d1iK(ڤiPZb -~~J??}9D|a_l"6F-:d@l6쵼DRE7 &OwCHFȘMIZ 0xcP{s6Slo߿30'a2rbE5`]٭3ÞY@'RD;AϹzGIqa;YɅY48CB8e` э k Y56XÜJ`zB}c֜&=Β LQ̳:F#{*t]9MޕVkgp?f6ɾjQlSI$I ȘE5&l, BgR*#_8SP6W놆+w.(?Ǚ9p0msKDY"q1&I>y=I)$pʌLIYdݖˌ)ObpL)D6"b6 5;fqOY4}HĉSJd*(C^(eR5Z0J,$or m|200Ʃ0~ M@<| >w8t*/`dy_P3H$#8*b•\i*m^RM(Ч-e$۞U䜧WFz94GwJ N)B}&DN&#xK{c]Q>u}z|:Iq ÿϫ;8zgG蝻8㔁>$HT1id*afဥa%>r'F_Xcj{ }haӸoVOfwI)Y{+{/RBʝ]H )wBG#RGBP@:RFqHRP "8# (FB$0vwk&}eGTHղbe8~aUBZ0Sy 3EȫsֽVm&~7!/XK6wڰj;lWkv͌%iŝf`X?o X.FS,*ּ5s3d[M5*4Z+X"-麸*RYV|7LUp>A:sVQf2UٽC|ߟ%vK8@?5iI;_} ;;m!ep>v+l 6g:y*9qd"SF18QRG4vQ IL%|2JrZ^mc{3Dp8tG3DH3}wD#e,m"%܃Gp'DD@ĒDg*cȘ3t5& s?:X>P nWq۳AjoZ=W*)ƱeMbI2I&3Zdx2 oMb ё)\'ڽuC jKmY%jfV~5 5 Tί! aSA^ސГA^VlZ$VM!da3[[km`W'rO =~^bUg u@Pgr^ܷbDH{ V+O_ibƙBE/u'ćO_iMZ<-󼾥=F ɻ;_QmP;XDbY:`,P\Izf$#1ѧ&1r1՝[\HAnmA~߹VV%Q[j? 
ZORk4mF28nǺtn*C/ ښkTG&*ާl^;OJ8cF{(>mٵΌ"v^OF,9SlYnKle9ׅe=%]=`w3[1w4HT~ޥmFeiBQHn7O<:/z\ .[oe^TR4^zwl[VU 9_{5GpL9/ܯ]4ħ "ƨEt uBQ%0apێn姞ѭ ]4ȧnta?|ܪT^m6A ɩa>Lߛ f=v;D'o5v0+8-_Ij&a7LxGJ%B0A /h@jj xOqUL(m4[ąP,5=v amG*Y BP2,k55>Eq?_U3w#wXD%g&K-ÚȰ& q]vX MDk"OanMp$0O M ^'Pw:|=yLa0{0b6]T>?$|C}XCݔJiKD%.{+BhS'<=GGGM:[DSwf?q@v,뜻}!6^ 8l<5w>ɐ\ FýyXL0ޤI*0*Y@St$z ; w<Tft 2<,l4& Rb$DFcM| Ki}bK)o.HSK_DecHqЁjԥƅ,sۼD,A^xY؃;ۣ򒭟 Bt~ nm/KZ ߥ9( qgeD#u%uy9ӒQ!p٢F /+4?Sw|JG辊\ǺH<e -BzH),p<ϯk!\Ar!AAJf1rH_^E⃰3 \ҁY*=%m!f۶da6\Q8p)C2]@+(MC;dR˦Cǰ2Tp}X Aګֹ+@z=5=4z=}ou\^8jyStsq9]M'WWg(/kZy—C㱞 bO%A'6S3_Z~i 0OJ$!zHpw Z64z~J? }9Tb,ѷ듸km2gcO_Qwþ$11`2fq7cymlF;7.DgLh)B(M9C,i3D)G<Ҋd1 _LyH5./jF_#߭s]=G|8&l, BgR*#.^B~4IVBY;Fwኒ)JbQI/j*&LS R Y^]'ɨOx}jJ>-a˯%XF{fȞOwm;z e׽X4  A^Xgd[G'=8ۗՒm[[n]Z#ML,uW,fix-ph%,E8g=w1;!]1djUMUŦd@f?h9qFhCɧLp 2㳤h؎Ll;F1ړE7n>k"Qt(Vnf+kq>+(( iZKGW%v]I2%O9^oJo?X vɪ R&HJ }pp7>}{y0Lr?n?^Iӣ3t鏃=wsz{ח .BT?ݻSj_*ֺ?k+ 6-1eUpJV|ª㲠(ǧGKCFrj8"gJPh,e =SA])c^<ͅYysnG_UT-,.†?]q@Qt$_SgDR.-WXaPfƃrFSS.ӢYPt13%#}Ũͧ~)E ꓪ=emqL3\d혈\q {#7 ekY! 
4z#jcK7ë`P@!"#bPq8ShT+U$>.քm(}0!cre En9B14=0.R] \ T!(F_5yNtG[ X[ f-~ߗ ҄H2 Qoy"ʉ8ݚr R)'F#suơLGw!)`yMPiۼ}2JSA-Ҋ I%Kz X %b!j|~K;0nhN0ד]נE3-Y~ ^vSr]d9P5ﯚVENp{5R0.j- jˮ]OkAhUӖMi3f4蜶Q.٠77z`.Ʀz5GT0˕^>_2Vbҝd;VJ[nRjJhB 'y-nԸokqpxʝf[e(js<OU'^A(JÍKb=M> p)Iz.Mn}`2:u,:Gmynf Z36X+f@qdukqUm ikZTnRހrYym d>(4?fջf+5t[ A #Ubo% B d'N$_Ƽ}iw۞LyU5'_:HwAD(kק3L355t ZT]C+>;lSKF$7aӳE2J-҆,Y\BK/ǿRZ|YKr{fޖ `lI_Gͷ  .(>j?f̆<`۫TݤupjHȪTwK|cЛQDVCހkNeeMt:;Bhpo0vr3yV)4f->2_is>+qYU"˝@N VXH.h) &#IԷɕ(j=#m7ː҂;¸IAЫ:1n4%0=-j)}$@ N, e4hz"1D 4yֺu7h06Ν2qt1 خN8jKxv̡# ]/IA/ˆ4TM^k O9heYw<#x0jG^LV@Ψj|!.x@?GR5m3E,DxXJA43nkHƷ晝jքbkک nM(^m=~+0ֹ=+uVS6|i9]I9K*oK^mEڸ2r0/1]9Ŭv?@)9jO|c\W)*ԝ TXGfھ4+l᧺ȷ`2L8Xe))_GƜr-pT%N):DLeiNo-Ҕ9qDRL (YS&@ueS=<9Yg0oh #OH %);>B[ѽV-9@ʕi[2ᜍ$7Hd ^KJ>Ij斬xƪ-yf.Mq.- A;2/W o6ea=7*AF!d i?L$(}@䰣DɉYL:ǂ\v.w PèZ"g3Is< OUf;2l$dׅU}vf5j$^+s֐vƕSNTrxf*L=SEEz{s_tA{򝗇MJ|LSF隮tUIg^tUIg^әxpG4^g&HxS:kcNyLBo//zcT<w}EK~?I*1 ޣT= ?3{T{蟃uo[?͒@jûqDYkay^^l>[ו]q/?黋M F3_~f0~Ci rѷ(B%-%!hyJKazw(5$Un@m30r2xFk`% %#r?P(1Z0RNaG \c&60o,Viq.1@,2<ԙ)J̒i֊Rɝ0|&%AriLF%R lnaF4i %/mCPL;GUf ZFmQrb cD)OTd[H"!( * hc8@nJ$:*R12!X|NōN("o\~tWA%SnU['#n-FʷA'WBE28|JTeF_~zwSb4z=.7ga`,_'7^On؛QeƷ648A.ldv+h"/W1Hms1}e/ASe?yl42h &ȨMH粿tdMԗ.y3S~)\AwY(Gp*ap]K>TJM4[*iN-E_t(Iyr!Gh3 I;+~=M+d<[4b :S/\1R*w'{e憱߶bxiC㣋b#ZPT(ME6+k+3GWmj:vE~ƟS6W,nM{+ŀ$r f;tfP e=?˃d:>~Hӣ3t鏃=wsz{ח .R$E)pw{N:c%i¥Bğ؏,M M c?fͯoG _EZLkтMKeW x*F3$~hMX\~j|;SeSA2YqOKPBv%ؗ|VNNm -.Fd)B9-7v-e+ 0J>A 3Phxq,lCa̻e%5Z}\i2s%ȣAcr1 k$Z20t[+uNX"RLwpěL<})R>=g8sS*j‡ȉRwњ Z_Cd5̣ˆQ᛿DuDcНd2ѕ'!%NHGVD5m%51ۈD!(2$X5\Eb"t ( ^dΘJnzAm4|Ex<|+8x3ddDiV87F?0e&9KnrBrGX1hM 3i "u?{WF E63@/а=v<%&)F:"z⃢"Ȍvf/Ǐ9GBKq @Tfh!Ӣ%"ƮVhF`8pð(c4s$W h\,)Z4ֻDWfڧVzl"B'CX@ hL&ž-Y@ma!d-ޞ5W}] er՚[.(h8|df"~B/nHiGF1( mqW+_I\<&Nǃ3Qϕkw_.R~yHٝ ED6ZwCI&phMPR:}Њn4 Еѵ VT# aS [L$AFAybJ.{oU) ɽ:\A~AD FH=4)B^"WQz׾JD=r5kPDt5Fi"}ĤSP9xghTLg(L`úAno/m%.p)Dn0`.&pmqϹЙ?n.]lPmQƚaw*a >*F(AƐHi [ #Ov{[m,>O<"!4&me%n>,Fm),ɎrP b$L ,de:d:Ff;Ftg簣 1<;Br^J򟕑(<+i_]u!jg?3t-#Xc BEARJ3S~0ɴz1 $2A 9ۊ5G9NҢrKےѽBK} 
qK42*I%zPPmwUAbxT&=ga[S=F܁iUg0:q#"P&@m)v-4{\dzo}6+c.$1Af.bʴƻ⌚΄kE/яHrv{V),nڡ_CѰA(9uWؗD75ј'~n92µƈCM>Uٖ*}kj*gkÉwU5/B7WsE^}W?~恕xH)>':Jg:xz:j\amwi¹š hnZyiWp{PY5|ڍgxN,W\H -X{f1;c'T1lF%ڂəGilE4 {(M z0X5 98Xev-B\)]HmE"7Oz8\CςB fO؅ 6ԥ.DA\8f"$]F5Q<*$ Qbyvβ佐(Ccv Z(W{Т NUO:FlTa;v5ݖ7 U&njݙ%E>QL&wlj5-ꉖG4,-ŸhؓIn~t;\>5%H|.RѰ![3Z?s8|Oz/+8~6kfij*CZ>Ji=~cz|!:VaJ-ޥu}@]ӌv-|EvuB3ڪ$5ҕ$Z-gܿԞo)Ť:&-;Z)¥_޻RwA3|uB|NþEhr}ơܘ[1M]8?!Lf7nE nzo3cw.R4~]"Rm+"8ʥKpju%.㲗APTָG9_=;DVoKC ®(,=/YjHPb,SFi\YdΠ9ZUiZIÍR.,f+@?M.=+UMzTK*QelASg;ț}7ZV[s m)]A"ˇ!Hk|_z~}xJpu]&}OSwQ()B\K p>ؕjԿLe h}SB0)Sn.gr{&DDP"5LQ D\\: Ҝwi%C+Iҷ0jU`G.Fo 9\"G1͸?Eiu`.2ѪRrNp ֖6m>tk9FtqIӕ| p+g)1冝|+F9]PIREPT'V4z<C\Ѐv+:[~W[t/?$jioi<=i:??$5Ǡ%])O:I}2MfVIZY 5bUYl{[SKj9^2#_ڒ!أejn>Y&TMYmvi2YUы yZ_R&"Xtӌ_Ϡ~K-Ax{C;&ճH+_lK% 25b,@ wk|J_C#_={;SW}ieGH5%`+sꐯ؍vz9XSz<}մ#>}(!2NqޝH a電5MBM! .If{}G& D4Kbu6),YqȬyWų9AIǼ*~jIn b~naQ9Qg.'N%+D&*E9hgnJ}ӯ^Mӛ&UѫjIFP>zM;BU `::EzpKHR^//PS""<\c^`~,]fL?{ .`&<;__yK{0;(L.!=4etO^uEuEuEuUս{BFQ"ʺH,%,0ƹt:&mSR9wp#ϧ8? C̃a\Eo{zғT>4{Mg졄{/ffwE']7 Q/-|&xDdq̂\fWJ`KV#P,)&5/>CNDN2O+%q9&Le(: / 9`D }>v;@:@mDОӜP`f Yx=35") 9P@dpRrndLFCs 3g%ZEЕCg ܘ~?j4]&(˳i]6kA- DZ72-=bny$<-ÜSn(ϳkbBΒ\Q傌($Al x +֘wIC&>}!Y5YK&# 'YP$d42z:]L_ RbmhT^GRiStq3F3jʜjdUnA4[|9Q|MooJVf2y3kt=ͮ Ћd=<|_^_0|O޳q$W~9mC8p}g#N) _3D-I_5 {HQlÖɞzuUuWuUjH޻yݭѽp<7|я?_OzK@% wo+o髟WeW7}D/~~MVlX҇Ve%S}fb狵htEoa|6 (a@ٻ x b7LSI=f w'U `X9dS]R'8pˏ,!UYs!Iy& X@W,T쑭,7Z fjD&x(EkPc^V䭕;0J*#w#ė;#.n&,Ov݉=jdSG:L|HyǤ6&=$v8o i|!7"ˈAJ: (#@y7` uS[E p+s>s5*ջ$L.Qx,D9e`M.Ryh'O/zal'q %}r;#4-YD\ؒwC^QfA )Vr_F|kIksxo(-8w@h4yq~;N8?k#ZKaBӫ~m-aqM #Axӡ2"-3ϔ2(&yn W24#ղ#$h$ )RJBB;JaFq @)uhM0lX?Ym;'EMuOfHuaQ^#GJ[8fh6\CU\~LSUkES:&7Z|`؈ ab c;d#,0.]VB#Z" {~|¸( q-րLG wEYR+QP^h gF#5cĬWK.@[KXp U;#0]c-fK 6n\QZ742\{x7*Ԅ #z NEHr/śi8TgW&容oSpVK PaP1>E,wQ΂tZ[h|J`* "zTCTI\0 a/}v.C kTuXpmG z  BE3Jc4w䨉s Z*9lest^Y\'F:6$JRw~?Y*Ms@.#y֠ΠHGAۀ#YS"]ƂÚ=K4;56\Ԇu㖃Qޡ_  uŮ@ >:Z<;DJKϊDv<l4c B e8`[gz<%Ua] -~b! 
ht(8F,b.ɌCk& )6S+;vH(M,ﴘ$SAKĤ\ N 1~4?cEv+ tuGS#/<2Y;ΐ"b(ÝYQn )>Mev㗥w⿒8?tUqkͦAJ9ۥIzA;'.aGgcz < QHKXf`M N 8:Q<̱ȯշoqQa(# hLi|:v㌐n4h$:hݞruT]8v@B"%SFmPW;j;] To&ڄ.W>$ZAB\֡:8[x+ȯɭ`G療D#![6;|ib}S _uŝ$^L&Ϳj/:} j21#IXx i:x![hOgcme\c0z}>2  q&8V6`86OO7JNy`d/R<ЙԆ|i,I7:aѣW۠E6 Ma7YMj'b078 cqXsdKUlTsBlLPj4GMhF TOvhKW!nnR`o{_h~ëB ( PN`-/ߜ]՛R]iArXu#x3Tؗ-܊t];a/jØd9&7hE;P7 =";NAlfKAl2Cm "KM H sv>uv[Dp !?}Y{w3\iuonGrI HG0ɽ@C72p~]&ն2>|cM?*B^ÀJI&q(԰r)X>#%? KG!KgÒ]Sس2$jR1{e=zYzpJz@<}ҟ ^C-KLp2x$;D7'l&XL_p2{Nmjk<#;2 V{Yaұ6G?|gEqPXzZ7J6pMV5o{sd܄/q6.٤byXx%%hSCҁjei#10l@@3Cō(X7yYot\;:`Vj7%#wڇvɹٿqqOɽmiu~1}:~t t#yǷF 6]ƿ=j׺bGbѫUˣb2-s^qxF`$Y+<ܤׅ)Kjs k^Qx5n SҜӡ,i;(09Ol"M+Yp,Q+Q1AlВֆȁQ:^ʣ3s=[T$-P:LQ,-[,ӖH.01TJksH8~Ie@W7G)~=T´Лwo+of*(n4/uѦ |WgŊ;N-A3;_KQ4}fڠY;XI*mV [+d}|5&PFΦ %f*Q ѶnREOR5,$^f\Mlz*oR6k5u~E1,_U֕VMs=sOpH`M.2z쮜jwź[rAizAdm%c.8' 8zsH Ձ`>D;c&RTdFRjQ[Orbirqo-4XGSؒx&jpid<?jxv.JMoFu"pW_n{^MVO)zǓIr6\:9 $fSwpѻ7S7PEEҭX&8 `G6FFWN'q  4y!#R.zvL eme6Fzj>PRMF{xg/HPSRkVdCzBWVbsRUG~D5;n\C["h{9RSs"f9GW+!<~72.;VU\Q*.NJ,K; %;]խõL['c=TjQGg5=9#k`8/Pz^,ϻg`guP|6uZĈhr 1Φ)NG<&hQJӯ`ZSU蹷NtXߦjo|t[l./hO7JD]@sk5}sExb٥N ħqF]8G,fnU REtGIm`~f]OAEeo`.h3#"ziXtw{sw}Sa*"uCV5Ei'=_me|)vGTFJ}ok):Jo=) rBu_z|8Y(x7<\)LEЍhsWLYD6R'_ F8 1h[e[aZO.'OmЏĪC"))L">lbr!I@E ֑CїsWV;^lnq͵{?"Vc =YiGQ[aFg֒k1B;AbQhr-5T66T"DjκNޘFkQ)ޒ$4%Th<t!:3D2Ev*)bń^݅IcB*fB EvʄL0!IQ-aTF9:FJ-F*XwWFJK$Qi+L/&oEI% ܔ3.yXSΘ744xX@0y= "|/`2>gW7!DVTךa?! 
<=?w(=xqx11uu6'p+!D2ub?`$_u @"chRhDHaMk*u8& #= )mG2 }BZO O`z ݅ivL6.¡ze_ ^ކfbDq  ayy `0_)D^tbK\ !ՍRz/k*[ah%9ʖ%/tCyOBK&^Ch|cz8:#4gxhɪM3cG/}"OQnze}Grz&WO xz^z8c#F.>D^?H{DktGŧd5J\}OjpqevJ~"T_Ѿ(E[gH 0#s4)8i2DF"/z3L 8eˤt˙?).",⒔80).'2aptҡ+<__0洘0(03-V%9ܦnDeyuU-$*vN\Zdz\ŵe^cQh\1Z.kr[99a}qL|2 Pπ˜IQ3b'ńtLfIgD%%q2T4rZSj+qV)p$1,k^4ag; 4B- 6˗.e߽[ 5w[#κ+^3CYQFqQL)m vzfגcGa2'$7 ,K`l^c$6aGnIIa8:F vIĘ&Iaq+b԰2,IH:Zj 2^K>d@ 1Q 7V!:hsLݶ'+ecW$VUʹ3`xӴ^ea?_0?mv6ƒ, F5\rjarӜzPa 9',!_ DuM_w<`i=F4j+kjpzdDIK0( (8749Q< {w>KJjM0H[ cDp3[38Iր*&y~þ}Kʯa g*tβ]J4 ι[bq̰]6ԕh3k2LMX-5[KMDzQ>EPQQE$8z!"*°&Dv.n1rv ШTϴ-  G [d/!.B &]J.y ?pK\pWEX^ܸm<_ tNWtNWS{V@=~Zjqg˄qvs7 J^vϮjX_-}"u`6:IݥkNe+8B?[6p/ e/{S> T,LBPʳG0d0aJ#jBsu i>0eXks mB &-&Ep62ɱAzCsAGdb^(q; -_SџIit%O$( [tMM/RF `nĪo>rT|TT,bU)̬*B0ݠ#ĽaCh>घ=eH=ΘkPH -:Y~*ZԂ9قRq'm%U sz҂NDr K ʥSs`4\u6EZE_i.l3Jle9>x߄CBGCF[".Ƅ"5om'qv 5lw?VtAAlQ.+j(Ƣ*ԀW X;T))1r2Os9Q@|cѐOiz|/8>jshcL\C݋Dz|ahQh,PE`glOmQY1o{zxaY9xDgi.n1gih-uI:r,ZvaJIlMF*¦T**"*~'LYԌHa4)%wd(OV_bg~1HRK†L{iAg\ͅVYɬ)OLC]8|=Rf-ҊE]",4&E/PF| I1{,:&O8t8[/'`^*`>@S/£V҇Ϝ6D[eib_m]_ qXdC7cC*1RPP?Tbr/nm k( cb9ȀK3'Ņs4S"aB DcyCS.h] > 99Zb`2WiިJ5HyC sy6%T&2J:R*'O > }j=\zӝޠyy%hبVR=DG r^p1ƺ1fQu9+l-'{7H"^Tn}$KĶ=7azlr6];j.ZtpO],s\\g5×/gT~IgDECȔ .ZѾsy^gyUa[|# \lmj=ȟ# }{L&IŢ~q'-5Xp "CjdRDߗ${=ǫ~ E<oB}aYa:)XBsW7fkH5/fY?TBhvU<ԠZ#0\ଳz&XE+k!xKe 7i ! 
e ɻXYRӊ +c34\QE:I)-{R ߡ 1- >"Wt(E7r<_4sK/ќ{3z?I+u5N+CA$5CqĦiD 4.2(w=ضISM+0N5С=n/3Z} #@g;`ٜt`$֟uhUAۡ/oQyfä(6y$s^aP,{0v8b"BOFU)22\tbs^ȉ.tZ,ae}|mZy}UQr5N.&h40;#٧ *NjBw|6 dǤygBA-WvVLԠZ^P‘9Ftz-z w ƸuO>(.qk s6%RT{GC.9JŽmiN=!KM墧O(Q?}, D OP;K+9aՕqHհZ*`'j,6FF;i Ǽl[4<sʡ(5|㼔X9RD)Y/huhxi >9Q)Ytu'"n%e;}HhɥhdFOv`{:*QgsZ&ڠN4SG#Q<+Q൞e\`A,TKNNQ55l|iATVF=z]ṅu4@ߝ~heJgB˽ox޵#m1K]X S.f3|ĖIv[N[I#FؒůXY$7E (y+旹d.sşS=2nн|nt?(nxd)K/~26H=?\eld]qukDte֛K @(P/G^r(/$ܪ;hkwɖFk6mI$ֻ:tjhv8m -7 1|fvY _ ?Ri\]{*z}Gkwн_xxw }gmK[:8NW֓l26_߿~}-W{c(΍®W[Gއ`axtGk SiĬX\֑?@qdǁMBĮT}4ˮgjf[{#렯ˏ{bdDXːmGߠs{j=@נf#4{LOZy.nWCT`w҉mLj{ZkIK%~~P lTyuN.ޢךK)s12)?0O L⾽G'Ș}T߭/PUԡ'eFUHi%`>{=.1Oa˥pGhØsBqn~-9q lpic)1fDBWs; V B}蠂P4S$tO`DIǏtl<+rwT/V߾oQp74%Whh1أI6wu?ޝ<*/WDw%fʾL x3jƆ^HD~c"-oPZb0b9821SE&x];kܠ >35NeZ֕!]?>yDI$u4܃3?⾢u3BX ܁2tsqOx*XPSB2#QbѧZ2Gv4oE?UtDճ>l8YO:MRG df2骜ڥ?7vtܯ 8ZL(7~s&?\]Rzs7f `=/`4ӈFrh&ISYWZ;(=Q>cMWܨtjr`zeVћ1RflǏ(h2-@w9/5lKES:4R}[bsM(DrZߘ97bGU{6A\5|XNhOa֏ۇ_~hI}pIbΆYAYY:?fj.OڐPWt8, \2TƀZ\JgN݇RuN|VSH߀tvWvZLbM"C*yx| ~:+˩{*"h_j^7Iئ4;Q/4nױ/ri(,tEO/IXUEj5:xhrKD-7kJ,@xzrC_۸'ۓДr䒷3=s6c^D%/(φiz ¯ԋ`~,)لšJtZMJb 4+U둭c53~DIA\4D41A\ƨx(1Ey)FB+~s-A^At֚ %PqX)9<~@E*VԹXqM|$_eGg2lb'%rN@MfDgEa;uXw(gLOJѴ>4ڧ&d0>R)YQ",-/g.TX.72Ug97cwD7jeܑY6Sۦdr ţ*OBhyPgf!ܔ(ZثsP\,eNg\~!!mӛЉt @e5}&R9b =] klG4#6u<ٸ J XyjZ>kz37T&K<&ۆsml?_\Ś6Qeծ FqGUlڔq \L>],eR8]DFc94Mp+񲝜nwxmT 5 R,锰r0΍#T#&#bkVK1g̃_τo\E5#MŇIt&N.rQ*q"}/kKvA`u~KV󗑮&ТAZ35)4eR8nֲ_*?'%meyVV{X[HG.ryN"3>b?,͂i 4jKMjJ1Rlxf#wUu;3ߞT~ı$g*O>{,cZNLq\2T Ź]@{@,FpJn,Ѓ@Fk6}l ٰy >3UKoǰ>YpxN?7ʅn27.Օ2`p8?+ՄF [NUg$Ls pk|z}n|&ƚh'pr)C$0GwXZc)pھ׶U.kZlrs /;6u  -aﻍ!3+|o,jbn1h/;n`-MncF3X09)!3M5=E x*4H R]lFLn%Zl,N'"_P-nL5b_KrUJ'.2 n$H1mCq$ =ջ`_M$`y8G4%+Vtr%ۧNt$5#)G0(( 4smS9&*:X(,9w] =FHͩ$5*&dꔒSm%@E]U}I_S}JӮwS1qoݔa%ֻg&-!B[{:6Uxx)po~|UL.y=ӰRpSXkA\ <MփgBNzC0Xrלf!$ ND@֞39r2LC#s < QZӠ K@Ykj<mۉv\"\([vXDGNDyɺlZm Xv`+$8FL1o#@d! AA"qڼ1Z(87S-*. 
Dq+!q9k$ NbSQ(b]~Xǽ+\e#([JMhg138C[~a8 {,j5RQO07C F")ӧhO"7y~Lqf(b'AnXJsO'7C 4O,7C MVq5MݮVUV`}1\\Vݯ/?7'{{`m*n?mzW!!Lо!'/{6j]]f6_S 0>]W7G0u/<Էh[Y\F;ܔ`>~7r<ԨTv߮ӚC`Bܷ R6yW|Z%{^;ץq5{t\LN7Cl1m 6d*j· ˪Z`F%gZ=vV3q6S;T~9Y0f s7o"!(^b /~FLψKٶ2`9{1DDu DC{\ |_z3M\kfy. yFOIcn~*u+7;dd.)=ճE縑R}15GD .>y…G3Lxwۈ0w^ίmO^`Y6/v&#R;?XF1,~w;Ŝܵl?O=Kj=ٮs*|V}~t5Ɨy+v5j+g&+QGb"<"V? Ga QNWx`)T!sN:)rI{$6ݽO?uWia|ac׍D̝mIBsк/>`:_ݹ蚀{/OcIք'GmcnҸT fVM,,x kCkGx!,I! ).| &f jR^{(_ f!'yԈjh0a>Ng9 |'XxNaC`  ixh^5F H-KqLpׯ`bS`T 9*=04vw[$L RJLLjD /H r48).Ƿ__1T3?TXTk#K3X74>\~m䞌?vcoƸO)K Xd42$2NYV:ײN3ba%fRWo"9r(( jf||ȤҲC-~L*{ 6L{ވ\' "C)7)0=fK/>}e*4\J0l KHX(E1r[y[h2TIoUp{_jgI^OY(5HBGeqidbɌ{IC#Zg/+nj!zb1qYhe:tokhG+x0qYO^ͬ 4U#V>P.*#U*OOb5>@8 gM&I"Q: qDq,&"p쀎z`5i 0VggG (eu'"S*XH0G,brYe(axa*_OFvd sF_%`DpshevpbȩVfТKhZ `(>l1`!q0( 2a[*O\DQLq%)iz!ȭ^N1" rE XB (a\Vp+m#IESTfFD3^``ր-[Xf'_ǀ@ٝh֙X߾~: YN9>6e%vAh ͌@̈́@Mt6feo8lf5vV/;w Y ;x v4YC?Ҹa[ KTg7TFeZN/'7RW*暓ߩLm0bz`9^ \/m.j3rsytuH<ިR^p^Բpu9 8nԫz\q!=Ż[OS\cΑ+V.!E+a5I+lמ88v&5BYwtʺ=xRnk܋? ?jkOդ&"VɛG\ĆZ <-%& Ј1`{ FhV:sAވz&H^G-6|׀XiCrH 9gPU4/JD(*V+ 8$h9M$vG Q"9FStF4:Y.,zmp!MY$m $Jx(E(E3ZiN($\Bv8^)I`X% P5!I[֮A"0dZ-e+j@[rCE e,Ei},ɢU| `س9 SL!RFHQJk[&URǥ|TFq=5.TVe&h\3J RZs3_hZ)H<3UJmYxW=PReLwjLztdBuo /ztՍսށ˧ߦH@JMX׭ߙ1VI95 5L: 1A`//Y!p,QCȕB5OW>M\._/}vz wPa`}R~/Oǣ5[Fm,tVoc&ވ#Dn&,MlۮME阁q=H1ݍÖ[ܸKX;7̈́ƦxՏMe驌%n&5Q8?ي?ڜ ݢ7.?bŚ59\L}Nyx,]>wﲚ_FˠB,_[= X1;XOyx֩kGt/2{w/՗:|d<08?smJ5$W=U^Md!?):8лC}x2h11Ļ/:YⶽF6&:ʦzT5|P;6V-3;x}V(6F6&ڷ)Z4KSNG:?6 +G2SARߡ@CLi~B+I٭6Z+hKAl %IK~k}/Gim"9%:$J=Đ+ta]PIBS-7Vx}vY="zL>xKmz{x{_1˽|yA35:s><zjq~yXnvͿ\yǏ[J=O9s1nw/Ն\-uάp8qU`oLr^̿<*,Ż"\t\M!Sj͉Nu"DqtEYxœŌ ~]|~ZuQ:~'^:SyJSR =ZDeDtIPs'օTL3|5WQ\D‡B68>R";,TҰ @uB |i֍ub[){p/Ӽ5 9 *E{ @Gv..J8UHyXQ81sk._sv)ot_;s)rHȮUsU' B 受~:J^aS\kCgYpao:QB"3!t`81 I{[]fik% ÑsJfW8䗂9}5NfFꯀSȝ=dyXAJ!qkaLlR|w7Vu0Dǫ4iЃ;VT*dl)Z/$ZN,oH)ٙ9E bJp'WV!jNH0{fZ;W6.u=5I)ce )Fݧ4gW6iyBaRcfW' /l1A%;\*cB:)cOe;y9ӥS/L '0P(hIp B7m~OS9$[*IDc:-D$9HZz=~4 ЄX钋Z:ڬ9x1=m<'-/F5}vs/g51]./+sa~smC$g)qjOBwmHKd $8,4j+e$I߯(#r;uǖZbC_IEJ8|n)78 
%묒;AD.3ЄDAD͵U'7_q-`ӈwRf(}rDr$J8BILbL;ॊgj~ug,#O$bqjB6 }o  ?79r61&$wT0 8Ǡ5u,XfAA$Kz fQɈ,W?+kqDmQ/nN_$ ׯZ }:0|(1N4v m4s$3WB84$K^&O)PJR4e g-2QE9,}M6L F@"}8!ᓞ_Y%p0Q sPV{H9j 1jȟV./qL&R% &bRڊޜ`^nV>Oh VIcN1ˀbD8k%aAo( >H_ۊ?U=|R"P 4&!e FE@wb|R, }&l: d??%$#[Y$uqVHw1p &$xy?^Iv8 *|iQ Jbc(k,.0t\fN[^ZEN]N+X}El'n_4#)0 2q)IzO$zs P)kzMLVu j6YTI촴rvOr"x3N7冚!B;nH _ BC$HGA>J+&J2:0)ы[Y!TbQ+On>(KR @ hS9.Ol0c"IV" ^s$:Rű:guYD3Dpsz=Sʢcl];[p8-}7fƾҙ6TG, 6#B dCr$%@x:kYڋjvحT^Q A:5$5f6&_}gj.pR Z1O $lʘH 9b w Xp^+R%%B>~VqYap74pM *GBLH\"#*^mMuZG/y"U7$4rd FgԅmuPVAg$wLpVX:÷~ 4[x]\ّj1^?"N# ~Ѯ(Uf5,(Hȧ Q;R#\"^lY*d΀9~^ً)U |,8  r6\W*UY$1#Ц гLO6 x*-10'Ipxc{44}q'9&pT1W3qU&˨MdPyb!r@A B) ±dG i{%rhR\aVA~*qgC.+"1i?#:i9q)C\#.dR z :+M]*:"dI-r<#, )#'[le\)FñNrW ^[*k85 (  +f\8u[l$[ikU$o~~ɬ?\垔xyܵ ~67;ZeuZcn`Mʏe -_&^F1N+!"t@N}wSi"r-彋X=tWC7hmM#(u% lQa}P; yz"JK M5ߺ kE/6R;i;vKLj4^l/TJ/a/_AE3Y~(kWA |W<tn#Vl XG3?AP$|!ʀjk?|[yNٕ=y|Zd$xh{\&([Ѩ&{%꩑ ^F~g3Bre3_-8u=+8҃AE`@˧F[U?ms 1;͏}JV2^s*>|SggAz 4|Dзn:)ϼ ERI%R%@} G< {3 3E hvT;E3(iM*1P*`Mm9y{.-KP<+UI Jb E}ykϭ/$bcl6Ue9*9$s)qtʭعJ"rޡ  ->5Lb$d )o.7lM`!%):&!г1"Ј[v 跸fV\9`"~EWʔV խ>֍Lv;,wCݸԜCև/wv!)_3{1&#|N<%ϳ%\)vږd"2OQL&v#p}#j8G*jej-~:0=]1ew9Y*/{lxץR0- =d);R(gg,GΨ,ZÙl%3e`(6nxET#Qnx1㾷j nF o1z$t90Fc$ZKFoĻQN{W,j*^8Y(yI'3+ɘ./ݲZ-[x<ů* hExLgRS5ݹ0u. u5≫IXvq\|{hom f yCGwC&8H 6$ #@ @)'AaFJ$"C '@ܷ8Cm?o/f6)BU?qrC]*;n謄 +n$kyQ9w,#I9y@k@N {@ӣA>"1T%@>;~{Աȅ?:<|WX-AEi2>mEvZ/@55Yrog:&H98Uj8^}I QA h$dC1aرjwGjkȵ)U=O{H97V+% mlysWǽk˺K& (<ʄ^LTؘtJ"I}Lj|,|º^(@ RHl'mˬT`RIlVzL5VzV JAr +UfkSVzVZv)?Bf:&c4>TJ乍8 $ܲ4)2D-W{H 3F\Qʁ 9y5&9 &n4a]jBeDt]QpeD#Li{lh Pt2 J!".b &g,-\28% NHYdit,Ɔv@cٵ):3al/w-A<kVD1!K8.ͮ\spVʡJ9d}s&aʬt-@u*pV*T|ET ZPeVڨՉ[)z|YaVq !VzV* Ts5\J]*:i}2#/|^km /uQBJ{H>M+rOS/qM#Yʢ)FK 8~c6 "B&3;RSԔc Ab+1ovgu+kVrho RHV۞2+jIVUfszV Ja\/Rlzg˜EX)bfPxӶR*K3MQEX)ƥkƝ_4?^Ki VхR^Ai:(u"-ӱ&hd+`[ӯ"E7JPt5$6ϭ zLasIG(R F9{TЖY,hCTrGj(zKӱL#K20\HJ$l\m@KvN#Jb=N?.:J-}Xt#H(C\RA k(܊3-™ϙ5un&!y͠oGk8S/C~CHߔ]×x/o/3>$O7\Kkޛ Yػ^<OX3|o>$)b>SQrPWS *&u|6g=VE#3x]o xa!$\N."? 
>-g5:9׭cmB Elw1۟'7 _fWR-q=׊ s(>Jv@W`s( u0ZJz)E92 ٖzޥ xMu8 5Z;oGXB#8^+ʧH U^F'ǭ W[ mI l܁H-{OV!D8_w_NcUqڊun>k\+z6%kuonߗcPP׃w>:5PH3ro7.ѰD6G].њ|(S^2!HA64^y B`xP")5̳UF<^#Ӳ"^b) }$d೺,Pn3 x !}V/ckMNG5qZe`h86CX(&Bcg`I#6Eb*I6M|h5CZ1a ^r['mC@bȹ`JR{B$CW!:V`Gg2\7n}0R1UCRKI"Y R nb._=יiVEʤh5L}"<{Zeެ##5}"&ɀާWBS&_Mcޗjp!@a,ws֙EMKPg_%JӾ6tkc}G;_ (^7&@tn8ʎ UÂ/:C.P pA|GCosQEؤ/~EI6=\2۸ȉNKeZ${n~4tkAH/O~hP͕B"qLh$\›Oa%k %{ LnO9PFBzZINfoӋ&.+Of24J.ZhT@ipσ==_l{=||4Cݺ.=Xjnӓkq4a5"<)zw9Wtً _g7:饭;l}ny+PkvaiҶۈ'X* !Fs Хuc:=Ba%cS =xB,l>iƎԅ(WJp)36[?F1M//>}~}/oo~po_*,yÛw?E SrDGW?O븬7*hS+rh&J^Rqjh=R Zx+Y[-زj[K Jb`p8¯.ܝtgiQo"r~9/ GgJ #ѕ䯢`rEgȵJbdf_E9rJ\|_ؔ=)wAS5S !^=Mֆ!܊0$[ec{]h+4K4'%r(V Aj9HyM^>cC&ЍR9a A2Nsh4Z?xE|{z~]݋{5S2\JgJ QhjlyJqŴAwd2}%6X"06\C2Hk,&4qmXZ"k=2 ?{kET!%JLj:Fg #À.V84ڃUF0Yab=JAJ玂,Z8Qۥ@*]r&hE$#V̰NS6Z,B|%BK&w$jkruZ~9́o[=&4i iQc>ȟ6Ow {+2rvUa} GJʫ_/2jqW ^Y0(Ug+=W#Cv;$4! %@gk!I2rBQf'ѐZ!Sk.Y9z vgVCn )VzF6i$[TC~5{_JrGZV9˚I}LjB~.k:&*|)geD:ҔJxͅr1`'t,Aa"O+c:0dM.1mH=N5f:l~萭: :V2;ݪ*8¡ܮ3VIs* IoMީ&hPԧo@^4+7@-LqaF0K"zb@hRb:R:ʙ\Hቘ봅LIj}.&5<=1aFA6zNQH '@HIu5}(FsQAjpICIxn:tIޕ57r#,) b/mάO8n,7̻q3z§+S-,  ~VJ : +g9hm \Vu҆w%,Vi+S .JJVvg R}CT8g|VzՊ XmTY)N% j?+uT+^"R$3IVðSTߕFVzVך9Da~[ctVWV:K?+ͩp|4ZY)b 5at1L5Y:qz|̬VSLRd3N? _:|#FOp |Q$kଛuhw˫$ &D8N"=enQLYf`\- VmOůU(w|ԺCM77'? 
5g#HG`kPmBqv5&R4#!6 =N oZ=iF7Bp;cИE%kGwß Af.yiN42d Dci< LU&$ ̔A$"P_SFc/k4Eߑ`$Ө#mj!aVҔtE EHt OZ@DZ5=e3|?{ AC{xd=~kv%>@3ȷ!jM]/•-"pH;`H~ kQ0ןBz$Re1޶jQ\m?Ȇu}&o@xp@;hCOu{COU "ݪ$ }q [0{=1õe׽_R' nK=<7 dmiU~ .9NՅ̓=fqBn,:ȧEZC A_)ıqLk6D)xXDD|Е3}weVTߕVD<Sk8+֒i+SM5"wjD"P6Z%Z)@Q87dhifdIMIIh$5$߸P169YjIu2BQ bn17X2R~`[7g!͛v8s"8,45YX_6_WGA4]I>ԳmrKd3&+Vlʹ]$} {S՘+М;K$]~6'gC Z24Nh6e:*Vz d41&\M$ >*TvigVi@X[i7skr K9|LJ{{yڒRR5(m퇽ǵv`l;@DZ48i~^Zp44r&ݢpܵ~rVBuTN[fHydk2H=r1 ?]O=¡g+Ql~?']-H?+Q JOQ}WZFhms$"1_J1G"R|mLY)S3Ux"~ZDvY|4=|)̞"L,'`*l0ovbsJC1zf ;ND$JlƧ>`<\Dm&l&xNsrΗvD)RtxXLbDQؠ0$JOx5H  ](;މ8ʹznV ϥs߃Z$ӵ>zBvR3-݌SUL({_٫3U5P`&%!&LP2ɠɌ.n0bRH똻cLOOPMyF FdF*idl˘Dǻ䪉@ ܽt~*+]A#WЅJZ+R)},FVTpab2c׏LiIX/%R88VRoJviuϓM4VBnemGL,-yyx趢G!?adrA ξ-jE4;}]N&-)'>IFJce,Τ:e26f%Vic%o$O3l:?!T0|Bf2D!`+ɔMbJDf9KR֭D6$3C  y ʂ=6VaOf|>RDQcg`8 9tjڢ/jԬg5ی\hԬFaR\Pj_5JN8Pi\W&?%p:nj"+|wx?[lx?nMF<1S)q<.NFj&~\9H*NbP8w2~7y-7Ѭ/z0IөQEM'ҬTI7^|Ц@H:YTٱnBq ) |wsun22wTnmXJTy_ր|&bS3=撚!xPN;x@RR-ӻ5a!߸ֶ)F_8eۻU++ D#zvrzӪfTsք#P0EлV_GK2Oz;M]jF6 nfke6:A~Xȅ]Oӧ{8q[ 5Z^/nzKS<ͱD DOS}WZVH%NQP~V 3PRP~V 9䣕^2>BYVГj)R7ڻWCA;E^ZB,g(S8N(L$6݃KL̒Q*,NmwPg(a S(i:bn3`7|LɃͿVO.?_V6ڴ4?O59#(핤h1Y'O&ʃ^[*&{^X3ӯ!B.wCg[g2W/[6>yKfXoUՅ n܌ZQ^S.$K&r ,5\Y|qmR T2FcUR[%ou!J)B:LP WBءRdX 977xXxfG 0Mz`f _X/nTa,A%S9F)rtPyQ >?׸%q;r'b‘ה-Oؼs(ʵ::}ӄVÁV|"lgMz%Aܿ1oJۆ63^ D.mi$VΔPC2ǚJ'Zdq ML ҈hȤX 3FQIS8IESIeBUr}!z M(D%.O{Xʨ$RQ t1&i%h@(Lwν'C)C=Ӕ0m"EbDjQT$N8bH$,H Mh34 ˙,?iw0S)5BJ; 2fQb1J!6L,D\(=&Cx88?kRi}xUx._!jg.4-9V/ӊ+ǛJ!BWv<ݪ/מW~>% x/*%7g.儉!mj (%39cDH9?{|*-~fVѹ-ymּpܜ?Lk}w,zVRnrj,yaKd,Wk\ =*?{* F3 dVIT"c)h 4xpp܊z}ۺH#Ah>j#wqU06 WtH$%1u`98#9=kAzҬQBfv=6*J%j8.Ԃ0) #:R̨]q9=Y'TfR jJ8i)n) kIT$*@ws S] "HD¤"k(`_ db=k:!"V|]<|O ~y]J~5'd:%:BgƱfo..u!T,vb{IK_f~y簘2<ߝlV}Qޟz|AGgݧy+1dY4=]`Gd_7`[a{|W7{^XiT G˵I+=E+b_J酕r "~p VzV *J!DrAcZ)8+ͥF·)[)ˉ{!EUjZ:X)ZiTNT.=I +ʉZH(%/=m+;gqC/*R+2OJ!rƇ<ۄ +u cC4T!B_wZKVzzVʽ*c4w˴qR_TL ӶR㬔3޷ZR㬔3dC4*W?o ~ +TdSRqV=^X6/*R/^.t҈HD=u,M2`Lg0knAK똴RcʐS %qKW)$Q݄)6aKߟ"|p|9͜j.xA\^:3n2?pLJ q9[уj4/}(y4iek^zOD#ӰW7/U#hY)$p实X΃{On>JrubZJV{#(YF3tT* ql,0HSBO^|If/.ū_\^<3*=ZqR pG%$vȬI:΂ 
7g%E\i݄:3Yμ.ȠJiTC9V˙E.8Zt>e0虳'{7Jsٛn*)嘚AB w}w1/%wۋ;+ ext]lRƅc)!v& ).joxñz K1/Y^rHL`}˛v]ο*9NE+O?yxS=זRzi+Djus;fZʔZL3xuzN`u422Rrm$H$IƎюcX & $XSJ b$6$ ̔2N F%MVJFۘh^HDX;nWH1 ROqiN RN+t% #RZI1d&\Yg7_!~貙/[z9@yDG˒yNsR\]E @L8ǹWuD XV+Sd*A x\wT?jc򉒂iԻokAZeX.84W@O:eXxc+3,T1GUdT1K/ [? 4`CkSlt({¯ h\LR#L@1.1r 8BcǮ"R^1c\RQZmm;MZS*ՒFrRBgԢέM MATF1{jc?Yy`"r5Z;/3%lWUj=YwropDY,Z\٩oR@Zkݛ:YI j S#ޗ7xDbWb1ܻ/#w%"QHEƨfdojAM܇OoPGoo?$xMp7sf 4驖z{䇛oW?=|f=#v|?o2-Eþ{zp8V:ER/{~np~FRR<~3Y.49ˆӔJ< U!Of>,)I_wanA?_oCZsa$π ]iC&]RV2K ᘽ\Ktp lߎ/D?7ڱ"pYkrl ܖHL@$Q~y1wRoLV;򓰷áNjn>s13pބn|}>ZXVB:~xtmᜠE7/SZab8t| s.}51}ܰx]91В={>(1S4Sm=uS#ZTP'M񢯀iwM-h`u!/IB}ĹcݐQXTP'M1Wnu#[ ym)ADf/te=wݧizpΎFeOo.^+_>*l)DmϿZ}{{ՋWVfMlfA|עn,ΧĹRФ֕ b;_ZNoj\WQo?99 {UL~඘*|nUv7 _kX:R^~&} , ]4v5o>N#% 8x _SQV5JFisDRtmx<8pMBÁtC}) TO!u!\Л>e!A TsxzO[AMpl\*F=FE`||7Cf,/t7C>A(eƈUm<#qhD һ|=܅Lp|zpW/FG. ʁFt 8rIp1=]hTDVMk6 j*==.VxW=5y.yjF )V{3`(" dn"Fp-T2\iËB@huHe{fM sثMԿԞBY 夐 @"ST(hnuΰ((ɍB^NMeBp~ɠyu[VJ%2&/srUhƵ-Z zH`i&;ԒֹLEļaO. D9c9[.)2\*eZTTl\$`&Jd'+3@bRiӽm˭V6nڏQ9:O͊2j3V;Qwa1,ѧ˧G^NJͿӁPt~~*yCU]FJt%=hԧr0,)V~CO^vml!A 7S&Uu*d-˾$#^j 3دA)j/FkE(Z֥Hm^jJA#o7K=C\ Ԋ1OAUo4<1f:}?=C^8E8E t1[Fxj_-[u!/)>߱n@+ŏYg.QQEѨhXg.pl))Ddg:~yCm6.f~]a]\,ݗ8z5_SM楽 [ O6=̖!Mnw73ŧW*Mঽʮ' s7|$3sW<_wz[L[כ%Ԣ|T U|U@ͦcAsdSgb!ɧLYJ|WN^WK+瑹 &1XKN΄oHJ$L#XZxN΄oAFJBՙ <\Ǹ~D#6ns\ڤ/M+X ۏ~Wzs[,P7 $!d.-rIԙ>f'θ`XRAQ+&r HYX uAGqWc^OjEvapjg fE5Fb%joo$c $M,W{pag7_.8 祗a|)ihBvHj <,Y7MZfyf5%,tOEDPB $ ǂK5?$M $/?"uLP:1^NݼvDw)WaBbʨ_l 'x;P G㙒D ,FK>Eo4L.*CjPYRzDUn%뭍S9&D"4dV[,ZrDz/ok|6a;hK0*=z߼әМ77* qkO[,;ٯۯno2M81~ϿbdA̒k9 8Z|S~^PL툞=A: \3N. 
ANG @9RQn{Cs??~`0fXiG'17yN/&ksY(e\Тmr7l9Ybȧ`O -ݓcD+ơgaNv^6#cuҬuNZi6*Sڛ?}8y0o|T8Ⱦ!(1C9cvK DtR#F󴕾Zj>$䕋hLu7=X1 햊A~GvA<0vK]W.;2X )+$TqK_7 ^9s۫կ}h§" կoϯNWxBP$x;q:c]CŽӋ=7ҟj% Z\D|eߝ\ޕnJ?*?ѯ1'9^{٬NŪZl?qrWySշ7^7򻼬elL-`65z{!Y [ |ǂ_~870=ݺf>!}{qacOnX~]* -᎝^X-Ou5f5S0ĀN4%% Ӷ>Ę IۓkPӓC/*kym%E"XR- a 5큊P;wAZ!&=jƸEkkuu;^M 4E{nGk]9Ly CJ:SS+QFN"}S٢4dFYTck"߆뚌Ů 4vJX\Paͅ8Hu%5whyJTR9+rMb!R?KIš@ yЊh jS\VWy+t,3F٢`kQc+K*?<[IaMeR`+9%t׫Ӛᵺ!Vp۪\D5( dӐrց4 ̕5*GRK$t^Zݹ)?W y=a8ZQwv[7Ĩ!g"ra4 FFڤ%A0L2NL>jgRiE}ʼ%t5'a5>R1^<xJΥ >iBaR;(H SW€(yU0h9Xn4Y7,4xU w#ۂQh+^-]t 6iy޻K*^s\P"ߖ’XPKQ2]QJ#C7΍Isi7w0ށӚ %bAKdMĨylMFҲ[hR='%3̟VJM88RìwMj`#ݛsa|Eqī0XmF|9uEGUa,FЋfK{C3ff]r̺(ԭ.pU#^? ]C`D c\9~ _% %fh&'84G9M>zOnZsVpÜ,G|-@xkɇG~2z|Q-0m3 -z8Ee:qx|ɇQ#F;î`I"+=tq(nyƇf%]!20}$qc2Sm0u;^˱B93J#0S@ِ䉮ciR;tZz/(>ԭs!v,%b4I*;Mf{XYsnNwh 1O_8 n}H+$BH(D %~G(A֍G9& ؃rݑ[6 3H9P(v ɰ`S'Cްw$-*" ێX Ŕ&_x4 Ov*Y(^ bІgB+`NEs];)uo-t=s(;TYpvb8^f7FAW- u,f .]?=:W9hG}K,CO (w" @z/ 9a9hWe0@ C 8ݑ- CcZɇU\w?]LwS *GЋX7Hn(Cu`09v>yÁؽL 85uؔ)6{J&49#3qG|X~xK? c}s-[OމS2Y䤬&J%}-.oWW7Gߺ_WcNR?S9^7 y4?[ʫMXكq}{㍖]^M ȶU/g~cown~ ;h=Ś$SlYK:4iR/I1K0g)VQnd7nnV*qh $B1Ԅ!]($ӝA5V@5C^a\/K5scE3jW-6/a9CseI|ȣEݬ̓Cد6g[ 诋ϧggpݻ#g9әhydʣj:(-D?HԚdeR YDoY&+Dm='Bqʪc,J Wȭv wJUjD͒<0YVj&vWG/Ard>@rUƘF=ZAA-3RV- :. yPp:U<\2 82T$8A*XY3E Δjh@Ġ Ӓg>.pr_Fg0sMZmdO [ҿVqahu!q>%XBˉ:\0UN`bHBP+Ե3+dhM? ES*R#vؑ(J>@Qd,JVi *qʫZV,+ kK( ]"WP@1eS v#*> vb! mWM ~€WESU2 d5#8px>)ōu北wK5<s:LF(820nc.h+a'CFj. Щ:{|̅f݋ =F+l7i7l'@I_Sy( ֌-E@h=qo̰Y9R@mpX7>P紬afVfR֌*fsa|*ᑚq \wln\OF/80jf/x\\8W>k:W1mbtnW7hP@5a2s3/ s9BӱUQ;j 8iڡX8rݤ%9@5*i47ȖaFs;TC (.zn4=h. :var/home/core/zuul-output/logs/kubelet.log0000644000000000000000005512636115134251545017713 0ustar rootrootJan 21 21:08:26 crc systemd[1]: Starting Kubernetes Kubelet... 
Jan 21 21:08:26 crc restorecon[4698]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 21:08:26 crc restorecon[4698]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 21:08:26 crc restorecon[4698]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc 
restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:26 crc restorecon[4698]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:26 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:27 crc 
restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 
21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc 
restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 
crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 
21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc 
restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 21:08:27 crc restorecon[4698]:
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 
crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc 
restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc 
restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc 
restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc 
restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc 
restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 21:08:27 crc restorecon[4698]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 21:08:27 crc restorecon[4698]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 21:08:28 crc kubenswrapper[4860]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 21:08:28 crc kubenswrapper[4860]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 21:08:28 crc kubenswrapper[4860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 21:08:28 crc kubenswrapper[4860]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 21:08:28 crc kubenswrapper[4860]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 21:08:28 crc kubenswrapper[4860]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.383415 4860 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385914 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385947 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385952 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385956 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385960 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385964 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385968 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385972 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385976 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385979 4860 
feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385983 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385986 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385990 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385993 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.385998 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386002 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386007 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386013 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386017 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386021 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386025 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386029 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386033 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386037 4860 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386040 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386044 4860 feature_gate.go:330] unrecognized feature gate: Example Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386047 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386051 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386054 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386058 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386062 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386066 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386069 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386073 4860 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386076 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386080 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386085 4860 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386090 4860 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386095 4860 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386099 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386103 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386107 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386110 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386114 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386118 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386122 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386127 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386131 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386135 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386140 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386145 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386149 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386153 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386158 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386163 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386167 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386171 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386175 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386178 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386182 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386187 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386190 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386193 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386197 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 21:08:28 crc 
kubenswrapper[4860]: W0121 21:08:28.386200 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386204 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386208 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386213 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386216 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386221 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.386224 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386601 4860 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386613 4860 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386632 4860 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386647 4860 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386653 4860 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386657 4860 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386663 4860 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386675 4860 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386680 
4860 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386684 4860 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386689 4860 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386693 4860 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386697 4860 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386701 4860 flags.go:64] FLAG: --cgroup-root="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386705 4860 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386709 4860 flags.go:64] FLAG: --client-ca-file="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386713 4860 flags.go:64] FLAG: --cloud-config="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386717 4860 flags.go:64] FLAG: --cloud-provider="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386720 4860 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386725 4860 flags.go:64] FLAG: --cluster-domain="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386729 4860 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386734 4860 flags.go:64] FLAG: --config-dir="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386738 4860 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386743 4860 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386748 4860 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 
21:08:28.386753 4860 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386757 4860 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386761 4860 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386766 4860 flags.go:64] FLAG: --contention-profiling="false" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386770 4860 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386774 4860 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386779 4860 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386783 4860 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386788 4860 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386792 4860 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386796 4860 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386800 4860 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386804 4860 flags.go:64] FLAG: --enable-server="true" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386808 4860 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386813 4860 flags.go:64] FLAG: --event-burst="100" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386818 4860 flags.go:64] FLAG: --event-qps="50" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386821 4860 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 
21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386825 4860 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386830 4860 flags.go:64] FLAG: --eviction-hard="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386835 4860 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386839 4860 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386843 4860 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386847 4860 flags.go:64] FLAG: --eviction-soft="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386851 4860 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386855 4860 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386860 4860 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386864 4860 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386868 4860 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386872 4860 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386876 4860 flags.go:64] FLAG: --feature-gates="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386881 4860 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386885 4860 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386889 4860 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386893 4860 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386897 4860 flags.go:64] FLAG: --healthz-port="10248" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386902 4860 flags.go:64] FLAG: --help="false" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386906 4860 flags.go:64] FLAG: --hostname-override="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386910 4860 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386914 4860 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386918 4860 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386922 4860 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386944 4860 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386950 4860 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386955 4860 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386960 4860 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386965 4860 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386972 4860 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386978 4860 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386982 4860 flags.go:64] FLAG: --kube-reserved="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386987 4860 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386991 4860 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386995 4860 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.386999 4860 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387004 4860 flags.go:64] FLAG: --lock-file="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387008 4860 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387012 4860 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387016 4860 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387023 4860 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387027 4860 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387031 4860 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387035 4860 flags.go:64] FLAG: --logging-format="text" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387039 4860 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387043 4860 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387047 4860 flags.go:64] FLAG: --manifest-url="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387051 4860 flags.go:64] FLAG: --manifest-url-header="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387057 4860 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387061 4860 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387067 4860 
flags.go:64] FLAG: --max-pods="110"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387071 4860 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387075 4860 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387079 4860 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387083 4860 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387088 4860 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387092 4860 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387096 4860 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387105 4860 flags.go:64] FLAG: --node-status-max-images="50"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387109 4860 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387113 4860 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387118 4860 flags.go:64] FLAG: --pod-cidr=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387122 4860 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387129 4860 flags.go:64] FLAG: --pod-manifest-path=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387132 4860 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387138 4860 flags.go:64] FLAG: --pods-per-core="0"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387143 4860 flags.go:64] FLAG: --port="10250"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387148 4860 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387153 4860 flags.go:64] FLAG: --provider-id=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387158 4860 flags.go:64] FLAG: --qos-reserved=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387162 4860 flags.go:64] FLAG: --read-only-port="10255"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387166 4860 flags.go:64] FLAG: --register-node="true"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387170 4860 flags.go:64] FLAG: --register-schedulable="true"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387174 4860 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387181 4860 flags.go:64] FLAG: --registry-burst="10"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387185 4860 flags.go:64] FLAG: --registry-qps="5"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387189 4860 flags.go:64] FLAG: --reserved-cpus=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387193 4860 flags.go:64] FLAG: --reserved-memory=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387199 4860 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387203 4860 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387207 4860 flags.go:64] FLAG: --rotate-certificates="false"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387212 4860 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387216 4860 flags.go:64] FLAG: --runonce="false"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387221 4860 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387225 4860 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387230 4860 flags.go:64] FLAG: --seccomp-default="false"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387234 4860 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387238 4860 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387242 4860 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387246 4860 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387250 4860 flags.go:64] FLAG: --storage-driver-password="root"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387254 4860 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387258 4860 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387262 4860 flags.go:64] FLAG: --storage-driver-user="root"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387266 4860 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387270 4860 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387274 4860 flags.go:64] FLAG: --system-cgroups=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387279 4860 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387286 4860 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387290 4860 flags.go:64] FLAG: --tls-cert-file=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387294 4860 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387298 4860 flags.go:64] FLAG: --tls-min-version=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387302 4860 flags.go:64] FLAG: --tls-private-key-file=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387306 4860 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387310 4860 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387314 4860 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387318 4860 flags.go:64] FLAG: --v="2"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387324 4860 flags.go:64] FLAG: --version="false"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387329 4860 flags.go:64] FLAG: --vmodule=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387333 4860 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.387338 4860 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387635 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387644 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387649 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387653 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387662 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387666 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387671 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387675 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387679 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387683 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387687 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387690 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387694 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387698 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387702 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387706 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387709 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387713 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387716 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387721 4860 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387725 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387730 4860 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387735 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387739 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387743 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387747 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387750 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387753 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387758 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387762 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387766 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387771 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387775 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387779 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387784 4860 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387788 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387797 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387802 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387806 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387811 4860 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387815 4860 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387819 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387825 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387829 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387835 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387841 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387848 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387852 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387856 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387860 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387865 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387869 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387873 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387877 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387880 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387884 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387888 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387891 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387895 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387899 4860 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387903 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387907 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387910 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387913 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387917 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387920 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387924 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387943 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387967 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387971 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.387975 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.388165 4860 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.407212 4860 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.407265 4860 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407334 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407342 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407348 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407352 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407356 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407360 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407364 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407368 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407372 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407375 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407379 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407383 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407387 4860 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407391 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407395 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407399 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407402 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407406 4860 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407410 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407413 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407417 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407420 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407424 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407428 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407431 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407435 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407438 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407442 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407446 4860 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407450 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407460 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407465 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407469 4860 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407477 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407481 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407484 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407488 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407492 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407498 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407504 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407508 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407513 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407517 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407521 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407525 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407529 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407533 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407536 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407541 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407544 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407548 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407551 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407555 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407560 4860 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407565 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407569 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407573 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407577 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407580 4860 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407586 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407591 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407596 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407600 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407604 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407609 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407616 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407621 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407626 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407631 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407637 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407641 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.407650 4860 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407777 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407788 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407795 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407802 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407808 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407813 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407818 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407823 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407827 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407832 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407835 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407839 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407843 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407847 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407851 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407855 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407858 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407861 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407865 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407869 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407873 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407879 4860 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407882 4860 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407886 4860 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407890 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407894 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407898 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407902 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407905 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407909 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407913 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407917 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407921 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407925 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407929 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407947 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407951 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407955 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407959 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407963 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407967 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407971 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407975 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407979 4860 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407984 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407987 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407991 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407994 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.407998 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408002 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408005 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408009 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408013 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408016 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408021 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408025 4860 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408028 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408032 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408036 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408039 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408044 4860 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408055 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408060 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408064 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408068 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408072 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408077 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408081 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408085 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408088 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.408092 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.408099 4860 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.408286 4860 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.424046 4860 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.424245 4860 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.425019 4860 server.go:997] "Starting client certificate rotation"
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.425062 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.425354 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-11 12:06:25.045231512 +0000 UTC
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.425461 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.434264 4860 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.435956 4860
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.438477 4860 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.445383 4860 log.go:25] "Validated CRI v1 runtime API" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.459476 4860 log.go:25] "Validated CRI v1 image API" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.462021 4860 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.466853 4860 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-21-03-57-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.466999 4860 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.488679 4860 manager.go:217] Machine: {Timestamp:2026-01-21 21:08:28.486962833 +0000 UTC m=+0.709141323 CPUVendorID:AuthenticAMD 
NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:5b1ad41e-3342-4aef-8a8f-31edafe270ff BootID:148647ae-8206-4b09-9045-f550cec0b288 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ba:37:55 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ba:37:55 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:31:d6:7e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a1:82:ac Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ec:2c:da Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d1:45:d6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:29:a3:73:a6:a2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9e:19:02:65:74:88 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 
Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.489056 4860 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.489325 4860 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.490052 4860 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.490357 4860 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.490444 4860 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.490726 4860 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.490742 4860 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.491038 4860 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.491089 4860 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.491294 4860 state_mem.go:36] "Initialized new in-memory state store" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.491729 4860 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.492693 4860 kubelet.go:418] "Attempting to sync node with API server" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.492717 4860 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.492751 4860 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.492763 4860 kubelet.go:324] "Adding apiserver pod source" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.492780 4860 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 
21:08:28.494750 4860 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.495216 4860 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.497582 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.497695 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.497657 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.497775 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.498727 4860 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 21:08:28 
crc kubenswrapper[4860]: I0121 21:08:28.499305 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499332 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499344 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499353 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499366 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499372 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499382 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499394 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499404 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499413 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499449 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499459 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.499695 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.500215 4860 server.go:1280] "Started kubelet" Jan 21 21:08:28 crc systemd[1]: Started Kubernetes 
Kubelet. Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.507710 4860 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.507994 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.507714 4860 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.509025 4860 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.509886 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.509488 4860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cdb25ac442448 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 21:08:28.500182088 +0000 UTC m=+0.722360558,LastTimestamp:2026-01-21 21:08:28.500182088 +0000 UTC m=+0.722360558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.510919 4860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 
21:08:28.511273 4860 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.511293 4860 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.511430 4860 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.511987 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:23:58.667975081 +0000 UTC Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.512223 4860 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.514453 4860 server.go:460] "Adding debug handlers to kubelet server" Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.518143 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.518494 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="200ms" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.519648 4860 factory.go:55] Registering systemd factory Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.519688 4860 factory.go:221] Registration of the systemd container factory successfully Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.518290 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to 
list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.521986 4860 factory.go:153] Registering CRI-O factory Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.522026 4860 factory.go:221] Registration of the crio container factory successfully Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.522123 4860 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.522153 4860 factory.go:103] Registering Raw factory Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.522173 4860 manager.go:1196] Started watching for new ooms in manager Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.522880 4860 manager.go:319] Starting recovery of all containers Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524515 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524563 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524580 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524619 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524640 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524652 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524664 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524677 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524692 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" 
seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524704 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524715 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524728 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524743 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524759 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524773 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 
21:08:28.524784 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524822 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524835 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524847 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524859 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524871 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524882 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524896 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524913 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524947 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524961 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524976 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.524988 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525007 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525021 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525036 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525050 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525063 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525076 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525088 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525100 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525113 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525125 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525137 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525148 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525160 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525172 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525184 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525196 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525210 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525223 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525237 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525251 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525264 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525276 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525289 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525301 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525320 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525335 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525349 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525363 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525376 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525390 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525403 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525416 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525428 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525455 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525477 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525492 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525513 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525527 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525546 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525560 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525581 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525600 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525614 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525626 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525638 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525651 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525662 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525675 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525695 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525706 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525719 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525737 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525753 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525765 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525778 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525790 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525802 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525815 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525827 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525839 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525852 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525864 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525877 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525889 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525902 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525915 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525959 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525973 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.525997 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526011 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526024 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526037 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526115 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526138 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526158 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526178 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526218 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526234 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526257 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526271 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526289 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526304 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526326 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526345 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526366 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526379 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526392 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526406 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526421 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526434 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526445 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526468 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526490 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526504 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526518 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526533 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526547 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526561 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526574 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526594 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526615 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526634 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526655 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526674 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526694 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526713 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526732 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526750 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526762 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526780 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526793 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526806 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526818 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526832 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526843 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526855 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526870 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486"
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526883 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526959 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526976 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526988 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.526999 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527011 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527025 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527042 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527054 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527067 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527079 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527092 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" 
seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527105 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527116 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527128 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527142 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527154 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527904 4860 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527976 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.527996 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528009 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528025 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528039 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528051 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528065 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528082 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528096 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528111 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528122 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528135 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528149 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528165 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528179 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528192 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528205 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528219 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" 
seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528233 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528247 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528296 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528312 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528326 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528341 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528354 
4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528367 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528379 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528391 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528404 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528418 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528430 4860 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528444 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528459 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528474 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528486 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528501 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528513 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528525 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528539 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528552 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528565 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528579 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528591 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528605 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528620 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528641 4860 reconstruct.go:97] "Volume reconstruction finished" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.528652 4860 reconciler.go:26] "Reconciler: start to sync state" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.544897 4860 manager.go:324] Recovery completed Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.561173 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.567316 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.567359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.567371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.568107 4860 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.568131 4860 cpu_manager.go:226] 
"Reconciling" reconcilePeriod="10s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.568174 4860 state_mem.go:36] "Initialized new in-memory state store" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.575731 4860 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.575786 4860 policy_none.go:49] "None policy: Start" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.576680 4860 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.576723 4860 state_mem.go:35] "Initializing new in-memory state store" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.577444 4860 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.577506 4860 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.577544 4860 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.577591 4860 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 21:08:28 crc kubenswrapper[4860]: W0121 21:08:28.578949 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.579002 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection 
refused" logger="UnhandledError" Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.611596 4860 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.648024 4860 manager.go:334] "Starting Device Plugin manager" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.648081 4860 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.648095 4860 server.go:79] "Starting device plugin registration server" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.648551 4860 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.648573 4860 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.648897 4860 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.649011 4860 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.649022 4860 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.655235 4860 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.678532 4860 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 
21:08:28.678708 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.679886 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.679924 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.679962 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.680205 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.680626 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.680717 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.681104 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.681134 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.681145 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.681270 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.681522 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.681636 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.681779 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.681830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.681843 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.682056 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.682091 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.682128 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.682305 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.682435 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.682472 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.683042 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.683070 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.683082 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.683325 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.683356 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.683371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.683773 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.683813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.683824 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.684002 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: 
I0121 21:08:28.684097 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.684131 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.684961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.684987 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.684996 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.685125 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.685143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.685152 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.685128 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.685245 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.685999 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.686030 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.686043 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.719864 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="400ms" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.730993 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731040 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731065 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731089 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731169 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731251 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731298 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731362 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") 
pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731428 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731488 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731518 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731535 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731565 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731603 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.731631 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.749069 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.750508 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.750564 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.750584 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.750638 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.751686 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.227:6443: connect: connection refused" node="crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833593 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833681 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833716 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833738 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833757 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833779 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833801 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833821 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833845 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833864 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833896 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833917 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833954 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833976 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833981 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.833997 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 
21:08:28.834083 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834050 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834449 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834410 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834525 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834151 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834649 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834666 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834702 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834747 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834741 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834759 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834762 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.834779 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.952641 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.954107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.954165 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.954376 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:28 crc kubenswrapper[4860]: I0121 21:08:28.954409 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 21:08:28 crc kubenswrapper[4860]: E0121 21:08:28.955014 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: 
connect: connection refused" node="crc" Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.026127 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.044191 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 21:08:29 crc kubenswrapper[4860]: W0121 21:08:29.048381 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a383f763192f0063cbf6f5c9c0aadafe4c9637d7400b8b0e2aa7bd5e79f1b924 WatchSource:0}: Error finding container a383f763192f0063cbf6f5c9c0aadafe4c9637d7400b8b0e2aa7bd5e79f1b924: Status 404 returned error can't find the container with id a383f763192f0063cbf6f5c9c0aadafe4c9637d7400b8b0e2aa7bd5e79f1b924 Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.055124 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:29 crc kubenswrapper[4860]: W0121 21:08:29.064275 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4fcf346c17bfcf4a8ab3cc762c6ae6acb2e220844060179614537562c987d36b WatchSource:0}: Error finding container 4fcf346c17bfcf4a8ab3cc762c6ae6acb2e220844060179614537562c987d36b: Status 404 returned error can't find the container with id 4fcf346c17bfcf4a8ab3cc762c6ae6acb2e220844060179614537562c987d36b Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.140731 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.141601 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 21:08:29 crc kubenswrapper[4860]: E0121 21:08:29.142181 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="800ms" Jan 21 21:08:29 crc kubenswrapper[4860]: W0121 21:08:29.153773 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f73d1195eeefda76f1ff7acf6dabeee5d621f3aca299edebc494a9efecebcd69 WatchSource:0}: Error finding container f73d1195eeefda76f1ff7acf6dabeee5d621f3aca299edebc494a9efecebcd69: Status 404 returned error can't find the container with id f73d1195eeefda76f1ff7acf6dabeee5d621f3aca299edebc494a9efecebcd69 Jan 21 21:08:29 crc kubenswrapper[4860]: W0121 21:08:29.163208 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e74dab50849b0a211c2a87961aa85df2f81e4e95f082c0d744740706552dd2b1 WatchSource:0}: Error finding container e74dab50849b0a211c2a87961aa85df2f81e4e95f082c0d744740706552dd2b1: Status 404 returned error can't find the container with id e74dab50849b0a211c2a87961aa85df2f81e4e95f082c0d744740706552dd2b1 Jan 21 21:08:29 crc kubenswrapper[4860]: W0121 21:08:29.294349 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-629e6304fc6052d4cd3eb9997e3645de41cfed251506d244317629400484c153 WatchSource:0}: Error finding container 629e6304fc6052d4cd3eb9997e3645de41cfed251506d244317629400484c153: Status 404 returned error can't find the container with id 
629e6304fc6052d4cd3eb9997e3645de41cfed251506d244317629400484c153 Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.355224 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:29 crc kubenswrapper[4860]: W0121 21:08:29.355461 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:29 crc kubenswrapper[4860]: E0121 21:08:29.355768 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.356761 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.356791 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.356800 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.356828 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 21:08:29 crc kubenswrapper[4860]: E0121 21:08:29.357154 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Jan 21 21:08:29 crc kubenswrapper[4860]: W0121 21:08:29.361667 4860 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:29 crc kubenswrapper[4860]: E0121 21:08:29.361771 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:29 crc kubenswrapper[4860]: W0121 21:08:29.450926 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:29 crc kubenswrapper[4860]: E0121 21:08:29.451031 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.512228 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:20:32.0017662 +0000 UTC Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.512477 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.582058 4860 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f73d1195eeefda76f1ff7acf6dabeee5d621f3aca299edebc494a9efecebcd69"} Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.583035 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4fcf346c17bfcf4a8ab3cc762c6ae6acb2e220844060179614537562c987d36b"} Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.583802 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a383f763192f0063cbf6f5c9c0aadafe4c9637d7400b8b0e2aa7bd5e79f1b924"} Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.584576 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"629e6304fc6052d4cd3eb9997e3645de41cfed251506d244317629400484c153"} Jan 21 21:08:29 crc kubenswrapper[4860]: I0121 21:08:29.585256 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e74dab50849b0a211c2a87961aa85df2f81e4e95f082c0d744740706552dd2b1"} Jan 21 21:08:29 crc kubenswrapper[4860]: W0121 21:08:29.694895 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:29 crc kubenswrapper[4860]: E0121 21:08:29.694977 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:29 crc kubenswrapper[4860]: E0121 21:08:29.943909 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="1.6s" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.158015 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.159686 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.159719 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.159728 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.159749 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 21:08:30 crc kubenswrapper[4860]: E0121 21:08:30.160365 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.509265 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:30 crc 
kubenswrapper[4860]: I0121 21:08:30.563604 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 21:08:30 crc kubenswrapper[4860]: E0121 21:08:30.564857 4860 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.568741 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 22:17:00.287210959 +0000 UTC Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.607156 4860 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f" exitCode=0 Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.607228 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f"} Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.607324 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.608299 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.608327 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.608338 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.609926 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25"} Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.610004 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3"} Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.610040 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e"} Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.611432 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b" exitCode=0 Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.611493 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b"} Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.611556 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.612565 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.612597 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.612611 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.613993 4860 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039" exitCode=0 Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.614051 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.614087 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039"} Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.615285 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.615315 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.615325 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.618318 4860 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636" exitCode=0 Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.618369 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636"} Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.618449 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.619064 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.619160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.619228 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.742300 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.744371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.744398 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:30 crc kubenswrapper[4860]: I0121 21:08:30.744406 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.516642 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:31 crc kubenswrapper[4860]: W0121 21:08:31.526650 4860 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:31 crc kubenswrapper[4860]: E0121 21:08:31.526752 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:31 crc kubenswrapper[4860]: E0121 21:08:31.545263 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="3.2s" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.569419 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:50:58.394481302 +0000 UTC Jan 21 21:08:31 crc kubenswrapper[4860]: W0121 21:08:31.596576 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:31 crc kubenswrapper[4860]: E0121 21:08:31.596682 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:31 crc 
kubenswrapper[4860]: I0121 21:08:31.622430 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d2b68332811aeb46cfec71d7c7809aa12d356779e431bb5e68f4306b2147cec8"} Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.622601 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.623721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.623760 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.623773 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.625254 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d"} Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.652840 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a"} Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.653035 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.654047 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 
21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.654093 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.654112 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.655825 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84"} Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.657820 4860 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e" exitCode=0 Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.657960 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e"} Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.658087 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.659320 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.659362 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.659374 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.760582 4860 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.762002 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.762063 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.762076 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:31 crc kubenswrapper[4860]: I0121 21:08:31.762103 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 21:08:31 crc kubenswrapper[4860]: E0121 21:08:31.762711 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Jan 21 21:08:32 crc kubenswrapper[4860]: W0121 21:08:32.042429 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:32 crc kubenswrapper[4860]: E0121 21:08:32.042528 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.213506 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.214277 4860 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" start-of-body= Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.214388 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.509631 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.570215 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:49:22.726350646 +0000 UTC Jan 21 21:08:32 crc kubenswrapper[4860]: W0121 21:08:32.583498 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:32 crc kubenswrapper[4860]: E0121 21:08:32.583694 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 
21:08:32.664966 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006"} Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.665498 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1"} Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.665419 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.667066 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.667126 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.667144 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.764989 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680"} Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.765048 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882"} Jan 21 21:08:32 crc kubenswrapper[4860]: E0121 21:08:32.768364 4860 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cdb25ac442448 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 21:08:28.500182088 +0000 UTC m=+0.722360558,LastTimestamp:2026-01-21 21:08:28.500182088 +0000 UTC m=+0.722360558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.768847 4860 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c8d829b70abf71a738026b7913bce65df7dcf39789358904055b21e86fa204f4" exitCode=0 Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.768965 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c8d829b70abf71a738026b7913bce65df7dcf39789358904055b21e86fa204f4"} Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.768955 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.769031 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.769062 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.770270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.770309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.770314 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.770340 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.770350 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.770359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.770320 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.770394 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:32 crc kubenswrapper[4860]: I0121 21:08:32.770426 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.509225 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.571014 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:45:05.426145171 +0000 UTC Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.774999 4860 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae"} Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.775046 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4"} Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.775151 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.776286 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.776313 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.776322 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.778768 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.779192 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"65489e86fb91369aadad4567cfa45918c2c8f6ff2cd7ae22e2e857e3c2721f73"} Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.779285 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.779595 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.779884 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.779906 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.779917 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.780502 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.780523 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.780535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.813211 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.813400 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 21 21:08:33 crc kubenswrapper[4860]: I0121 21:08:33.813430 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection 
refused" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.571830 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:44:52.014906088 +0000 UTC Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.785245 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3f47067b55815a00aa28905b98d7a65531fcc94bd78506cfb8c4a122b1bd899d"} Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.785293 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1f060e1aa14d25d13a870316cece62ff1fe474e5752195ff9e093c8f760531e6"} Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.785308 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1b972c6fcdcb7e2386982d0a02992820af357c7068ee93d1b0ffd917c50d68cf"} Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.785319 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a50b05dbf2209e0f071b99161d6a8309d5e7e78c6238f58dea5972ced5d205d3"} Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.785336 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.785452 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.785976 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.786005 4860 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.786152 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.786175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.786184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.786342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.786361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.786369 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.786607 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.786640 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.786979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.838103 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.935175 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 
21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.963499 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.965205 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.965268 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.965278 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:34 crc kubenswrapper[4860]: I0121 21:08:34.965308 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 21:08:35 crc kubenswrapper[4860]: I0121 21:08:35.572062 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:00:50.226161504 +0000 UTC Jan 21 21:08:35 crc kubenswrapper[4860]: I0121 21:08:35.788556 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:35 crc kubenswrapper[4860]: I0121 21:08:35.788556 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:35 crc kubenswrapper[4860]: I0121 21:08:35.790154 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:35 crc kubenswrapper[4860]: I0121 21:08:35.790205 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:35 crc kubenswrapper[4860]: I0121 21:08:35.790228 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:35 crc kubenswrapper[4860]: I0121 21:08:35.790399 4860 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:35 crc kubenswrapper[4860]: I0121 21:08:35.790436 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:35 crc kubenswrapper[4860]: I0121 21:08:35.790454 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:36 crc kubenswrapper[4860]: I0121 21:08:36.573030 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:14:43.138016848 +0000 UTC Jan 21 21:08:36 crc kubenswrapper[4860]: I0121 21:08:36.790430 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:36 crc kubenswrapper[4860]: I0121 21:08:36.791356 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:36 crc kubenswrapper[4860]: I0121 21:08:36.791392 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:36 crc kubenswrapper[4860]: I0121 21:08:36.791405 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:37 crc kubenswrapper[4860]: I0121 21:08:37.556392 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:37 crc kubenswrapper[4860]: I0121 21:08:37.556587 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:37 crc kubenswrapper[4860]: I0121 21:08:37.557840 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:37 crc kubenswrapper[4860]: I0121 21:08:37.557874 4860 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:37 crc kubenswrapper[4860]: I0121 21:08:37.557885 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:37 crc kubenswrapper[4860]: I0121 21:08:37.573368 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:29:21.826879522 +0000 UTC Jan 21 21:08:37 crc kubenswrapper[4860]: I0121 21:08:37.764300 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:37 crc kubenswrapper[4860]: I0121 21:08:37.793103 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:37 crc kubenswrapper[4860]: I0121 21:08:37.794178 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:37 crc kubenswrapper[4860]: I0121 21:08:37.794218 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:37 crc kubenswrapper[4860]: I0121 21:08:37.794229 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:38 crc kubenswrapper[4860]: I0121 21:08:38.574245 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:08:22.121497771 +0000 UTC Jan 21 21:08:38 crc kubenswrapper[4860]: E0121 21:08:38.655385 4860 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 21:08:38 crc kubenswrapper[4860]: I0121 21:08:38.942046 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 21:08:38 crc kubenswrapper[4860]: I0121 
21:08:38.942253 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:38 crc kubenswrapper[4860]: I0121 21:08:38.943866 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:38 crc kubenswrapper[4860]: I0121 21:08:38.944045 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:38 crc kubenswrapper[4860]: I0121 21:08:38.944087 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:39 crc kubenswrapper[4860]: I0121 21:08:39.574841 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:25:17.637062281 +0000 UTC Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.141298 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.141470 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.142756 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.142802 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.142817 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.147202 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:40 crc 
kubenswrapper[4860]: I0121 21:08:40.334915 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.342608 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.575545 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:58:45.024815223 +0000 UTC Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.801115 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.802249 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.802295 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:40 crc kubenswrapper[4860]: I0121 21:08:40.802309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:41 crc kubenswrapper[4860]: I0121 21:08:41.576318 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:52:52.373816333 +0000 UTC Jan 21 21:08:41 crc kubenswrapper[4860]: I0121 21:08:41.803973 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:41 crc kubenswrapper[4860]: I0121 21:08:41.805564 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:41 crc kubenswrapper[4860]: I0121 21:08:41.805656 4860 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:41 crc kubenswrapper[4860]: I0121 21:08:41.805668 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:42 crc kubenswrapper[4860]: I0121 21:08:42.576868 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 02:42:02.701154727 +0000 UTC Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.536179 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.536489 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.538296 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.538339 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.538350 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.577429 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:58:28.616991582 +0000 UTC Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.785088 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.813020 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.814569 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.814619 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.814630 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:43 crc kubenswrapper[4860]: I0121 21:08:43.824860 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 21:08:44 crc kubenswrapper[4860]: I0121 21:08:44.509886 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 21:08:44 crc kubenswrapper[4860]: I0121 21:08:44.578564 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:33:14.030318051 +0000 UTC Jan 21 21:08:44 crc kubenswrapper[4860]: E0121 21:08:44.748554 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 21 21:08:44 crc kubenswrapper[4860]: I0121 21:08:44.822443 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:44 crc kubenswrapper[4860]: I0121 21:08:44.825510 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:44 crc kubenswrapper[4860]: I0121 21:08:44.825582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 21:08:44 crc kubenswrapper[4860]: I0121 21:08:44.825596 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:44 crc kubenswrapper[4860]: E0121 21:08:44.937003 4860 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 21:08:44 crc kubenswrapper[4860]: E0121 21:08:44.966551 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 21 21:08:45 crc kubenswrapper[4860]: I0121 21:08:45.214805 4860 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 21:08:45 crc kubenswrapper[4860]: I0121 21:08:45.214980 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 21:08:45 crc kubenswrapper[4860]: W0121 21:08:45.383904 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 
21 21:08:45 crc kubenswrapper[4860]: I0121 21:08:45.384151 4860 trace.go:236] Trace[521131835]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 21:08:35.381) (total time: 10002ms): Jan 21 21:08:45 crc kubenswrapper[4860]: Trace[521131835]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (21:08:45.383) Jan 21 21:08:45 crc kubenswrapper[4860]: Trace[521131835]: [10.002339955s] [10.002339955s] END Jan 21 21:08:45 crc kubenswrapper[4860]: E0121 21:08:45.384188 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 21:08:45 crc kubenswrapper[4860]: I0121 21:08:45.434802 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 21:08:45 crc kubenswrapper[4860]: I0121 21:08:45.435283 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 21:08:45 crc kubenswrapper[4860]: I0121 21:08:45.442004 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 
403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 21:08:45 crc kubenswrapper[4860]: I0121 21:08:45.442096 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 21:08:45 crc kubenswrapper[4860]: I0121 21:08:45.661264 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:35:14.336603446 +0000 UTC Jan 21 21:08:46 crc kubenswrapper[4860]: I0121 21:08:46.667078 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:42:51.80207602 +0000 UTC Jan 21 21:08:47 crc kubenswrapper[4860]: I0121 21:08:47.668363 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:22:28.222343019 +0000 UTC Jan 21 21:08:48 crc kubenswrapper[4860]: E0121 21:08:48.655605 4860 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.669310 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:42:42.21408067 +0000 UTC Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.821159 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.821375 4860 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.822594 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.822633 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.822645 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.826033 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.835506 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.835555 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.836496 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.836521 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:48 crc kubenswrapper[4860]: I0121 21:08:48.836539 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:49 crc kubenswrapper[4860]: I0121 21:08:49.669817 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:43:15.953781631 +0000 UTC Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.419139 4860 trace.go:236] Trace[761240908]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 
(21-Jan-2026 21:08:35.766) (total time: 14652ms): Jan 21 21:08:50 crc kubenswrapper[4860]: Trace[761240908]: ---"Objects listed" error: 14652ms (21:08:50.418) Jan 21 21:08:50 crc kubenswrapper[4860]: Trace[761240908]: [14.652986952s] [14.652986952s] END Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.419744 4860 trace.go:236] Trace[94442691]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 21:08:36.788) (total time: 13631ms): Jan 21 21:08:50 crc kubenswrapper[4860]: Trace[94442691]: ---"Objects listed" error: 13631ms (21:08:50.419) Jan 21 21:08:50 crc kubenswrapper[4860]: Trace[94442691]: [13.631638326s] [13.631638326s] END Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.419788 4860 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.419738 4860 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.421025 4860 trace.go:236] Trace[177102169]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 21:08:37.291) (total time: 13129ms): Jan 21 21:08:50 crc kubenswrapper[4860]: Trace[177102169]: ---"Objects listed" error: 13129ms (21:08:50.420) Jan 21 21:08:50 crc kubenswrapper[4860]: Trace[177102169]: [13.129605962s] [13.129605962s] END Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.421117 4860 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.421868 4860 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.580272 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50836->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.580345 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50836->192.168.126.11:17697: read: connection reset by peer" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.581235 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.581762 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.670151 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:27:22.405327812 +0000 UTC Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.692546 4860 apiserver.go:52] "Watching apiserver" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.861469 4860 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.862661 4860 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.863590 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.863643 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.863590 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:08:50 crc kubenswrapper[4860]: E0121 21:08:50.863720 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.863786 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.863591 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 21:08:50 crc kubenswrapper[4860]: E0121 21:08:50.863884 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.863917 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:50 crc kubenswrapper[4860]: E0121 21:08:50.864103 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.866377 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.867090 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.867188 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.867580 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.867680 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.869207 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.869242 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.872366 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.913874 4860 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.920831 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 21:08:50 crc kubenswrapper[4860]: 
I0121 21:08:50.924262 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924305 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924339 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924358 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924384 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924402 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924420 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924440 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924496 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924533 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924555 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924575 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924611 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924635 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924667 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924696 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924769 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 
21:08:50.924825 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924879 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924902 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.924971 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925002 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925023 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925045 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925088 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925111 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925134 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925164 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 21:08:50 crc 
kubenswrapper[4860]: I0121 21:08:50.925193 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925217 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925239 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925270 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925310 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925335 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925375 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925402 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925483 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925514 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925537 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925571 4860 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925594 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925622 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925642 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925666 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925689 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925733 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925755 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925780 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925802 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925824 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925884 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925907 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.925975 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926004 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926029 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926055 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926081 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926096 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926096 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926109 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926201 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926233 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926262 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926307 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926336 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926429 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926430 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926474 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926680 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926713 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926737 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926760 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926783 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926816 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926849 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926880 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926904 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926926 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.926974 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927007 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927032 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927058 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927081 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927100 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927123 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927149 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927173 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927197 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927220 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927241 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " 
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927259 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927275 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927291 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927308 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927323 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927341 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") 
pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927359 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927397 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927416 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927433 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927451 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927483 4860 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927515 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927537 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927568 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927599 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927621 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927645 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927662 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927678 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927695 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927718 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 21:08:50 
crc kubenswrapper[4860]: I0121 21:08:50.927735 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927788 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927822 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927845 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927893 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927959 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.927984 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928006 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928030 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928074 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928099 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 21:08:50 crc kubenswrapper[4860]: 
I0121 21:08:50.928711 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928755 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928782 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928834 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928867 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928895 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928924 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928886 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.928967 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.929284 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.929783 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.930026 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.930139 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.930388 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.930670 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931066 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931139 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931261 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931619 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931689 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931728 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931772 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931816 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931827 4860 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931842 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931852 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931887 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931920 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.931981 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.932284 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.932343 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.932361 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.932345 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.932386 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.932435 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.932563 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.932582 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.933281 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.937752 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.937899 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.938173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.939043 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.939502 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.940366 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.940453 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.941030 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.942013 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.932384 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.980116 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.980161 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") 
" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.980186 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.980209 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.980237 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.980258 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.980279 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.980302 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.980319 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981257 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981315 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981353 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981396 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 21:08:50 crc 
kubenswrapper[4860]: I0121 21:08:50.981428 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981689 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981734 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981955 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.982268 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.982553 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.982514 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.982871 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983201 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983370 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983533 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983751 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983841 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.984543 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.984603 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.984634 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.984662 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.984690 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.985911 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.986018 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.986181 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.986216 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.986241 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.979904 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.980064 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981333 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981478 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981625 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981713 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981814 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.981859 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.982040 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.982263 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.982458 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.982495 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.982956 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983364 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983397 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983515 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983509 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983673 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983804 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983833 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983994 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.983995 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.984291 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.984499 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.984506 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.985204 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.985753 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.986296 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.987469 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.988126 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.988606 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.985813 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.989523 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.989669 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.991107 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.991353 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.991713 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.992805 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.992816 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.993298 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.993341 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.993345 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.993380 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.993809 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.995088 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.996389 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.996893 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.996976 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.997058 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.998506 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.998532 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.998526 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.999150 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.999331 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.999794 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:50 crc kubenswrapper[4860]: I0121 21:08:50.999872 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.000229 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.000374 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.000736 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.000807 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.001183 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:08:51.501118893 +0000 UTC m=+23.723297423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.001361 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.001575 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.001833 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.002222 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.002708 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.002840 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.004196 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.005159 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.005683 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.006357 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.009504 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.009619 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.009849 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.010358 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.010694 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.010972 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.011287 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.011319 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.011527 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.012074 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.012249 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.012440 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.012716 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.013198 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.013253 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.013774 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.014712 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.014717 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.014778 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.014625 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.014873 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.015011 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.015507 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.015544 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.015500 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.015829 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:50.989540 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.016004 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.015546 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.016398 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.016483 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.016671 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.016845 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.017441 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.017786 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.017985 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.018064 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.018619 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.018832 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.018917 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.019276 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.019362 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.019607 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.019695 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.019729 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.019913 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.020493 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021125 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021156 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021215 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021258 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021333 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021526 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021572 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021640 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021643 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021656 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021798 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021827 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021856 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021882 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.021908 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.022044 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.022072 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.022068 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.022523 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.022533 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028338 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028403 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028431 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028458 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028499 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028525 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028549 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028570 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028693 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028730 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028789 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028817 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028872 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028910 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.028959 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029016 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029055 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029079 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") 
pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029107 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029137 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029161 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029193 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029338 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029353 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029370 4860 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029383 4860 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029397 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029412 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029428 4860 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029441 4860 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029455 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029470 4860 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029482 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029496 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029517 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029531 4860 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029552 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: 
I0121 21:08:51.029573 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029585 4860 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029604 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029617 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029631 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029649 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029662 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029676 4860 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029696 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029710 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029722 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029741 4860 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029769 4860 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029789 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029803 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029815 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029828 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029848 4860 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029860 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029872 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029885 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029897 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 
21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029911 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029924 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029963 4860 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.029984 4860 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030002 4860 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030015 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030030 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030042 4860 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030055 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030068 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030081 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030095 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030110 4860 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030123 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030136 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030149 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030161 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030174 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030186 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030200 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030213 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030226 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node 
\"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030251 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030270 4860 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030282 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030294 4860 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030307 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030319 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030331 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030346 4860 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030361 4860 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030375 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030387 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030400 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030421 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030440 4860 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030453 4860 reconciler_common.go:293] "Volume detached for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030467 4860 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030486 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030499 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030512 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030538 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030560 4860 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030578 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath 
\"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030599 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030611 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030624 4860 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030635 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030648 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030661 4860 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030674 4860 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030687 4860 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030706 4860 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030721 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030744 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030762 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030774 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030787 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030801 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030814 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030829 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030842 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030854 4860 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030866 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030885 4860 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030897 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" 
DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030910 4860 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030922 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030959 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030978 4860 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.030990 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031005 4860 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031022 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031037 
4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031053 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031070 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031087 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031100 4860 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031114 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031127 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031140 4860 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031152 4860 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031166 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031193 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031206 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031219 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031238 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031250 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031264 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031277 4860 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031290 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031303 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031314 4860 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031327 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031338 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath 
\"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031351 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031369 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031381 4860 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031394 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031405 4860 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031418 4860 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031430 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031443 4860 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031455 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031468 4860 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031481 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031496 4860 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031508 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031521 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031532 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031545 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031559 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031571 4860 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031584 4860 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031597 4860 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031610 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.031622 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.034223 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.034483 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.034635 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.035000 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.035013 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.035127 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:51.535086689 +0000 UTC m=+23.757265209 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.035148 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.035556 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.035857 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.035883 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.036197 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.036347 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.036727 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.037563 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.038425 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.038495 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.038504 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:51.538481571 +0000 UTC m=+23.760660061 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.038739 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.039896 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.052763 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.053071 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.055825 4860 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.058753 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.058960 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.059391 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.059449 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.059479 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.059588 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:51.559548332 +0000 UTC m=+23.781726982 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.060158 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.060783 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.062039 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.062153 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.064197 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.064577 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.067205 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.069406 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.069463 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.069494 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.069611 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:51.569572642 +0000 UTC m=+23.791751302 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.079153 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.081083 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.081305 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.081776 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.088409 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.089187 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.089876 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.090951 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.091378 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.091813 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.092574 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.094000 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.096489 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.098409 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.099097 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.099826 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.100157 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.100268 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.100276 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.100596 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.103443 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.105225 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.105351 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.105657 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.114995 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.120473 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.132877 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.132944 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133018 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133032 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133042 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133052 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 
21:08:51.133062 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133071 4860 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133079 4860 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133088 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133097 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133106 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133115 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133126 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133135 4860 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133144 4860 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133153 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133162 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133171 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133180 4860 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133192 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133201 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133210 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133218 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133227 4860 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133236 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133254 4860 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133263 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc 
kubenswrapper[4860]: I0121 21:08:51.133272 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133281 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133292 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133303 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133314 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133324 4860 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133333 4860 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133342 4860 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133351 4860 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133359 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133368 4860 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133377 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133386 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133395 4860 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133456 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.133535 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.192838 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.201308 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.210874 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.261221 4860 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 21:08:51 crc kubenswrapper[4860]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 21 21:08:51 crc kubenswrapper[4860]: if [[ -f "/env/_master" ]]; then Jan 21 21:08:51 crc kubenswrapper[4860]: set -o allexport Jan 21 21:08:51 crc kubenswrapper[4860]: source "/env/_master" Jan 21 21:08:51 crc kubenswrapper[4860]: set +o allexport Jan 21 21:08:51 crc kubenswrapper[4860]: fi Jan 21 21:08:51 crc kubenswrapper[4860]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 21 21:08:51 crc kubenswrapper[4860]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 21 21:08:51 crc kubenswrapper[4860]: ho_enable="--enable-hybrid-overlay" Jan 21 21:08:51 crc kubenswrapper[4860]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 21 21:08:51 crc kubenswrapper[4860]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 21 21:08:51 crc kubenswrapper[4860]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 21 21:08:51 crc kubenswrapper[4860]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 21:08:51 crc kubenswrapper[4860]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 21 21:08:51 crc kubenswrapper[4860]: --webhook-host=127.0.0.1 \ Jan 21 21:08:51 crc kubenswrapper[4860]: --webhook-port=9743 \ Jan 21 21:08:51 crc kubenswrapper[4860]: ${ho_enable} \ Jan 21 21:08:51 crc kubenswrapper[4860]: --enable-interconnect \ Jan 21 21:08:51 crc kubenswrapper[4860]: --disable-approver \ Jan 21 
21:08:51 crc kubenswrapper[4860]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 21 21:08:51 crc kubenswrapper[4860]: --wait-for-kubernetes-api=200s \ Jan 21 21:08:51 crc kubenswrapper[4860]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 21 21:08:51 crc kubenswrapper[4860]: --loglevel="${LOGLEVEL}" Jan 21 21:08:51 crc kubenswrapper[4860]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 21:08:51 crc kubenswrapper[4860]: > logger="UnhandledError" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.261398 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.262784 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.267691 4860 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 21:08:51 crc kubenswrapper[4860]: container 
&Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 21 21:08:51 crc kubenswrapper[4860]: if [[ -f "/env/_master" ]]; then Jan 21 21:08:51 crc kubenswrapper[4860]: set -o allexport Jan 21 21:08:51 crc kubenswrapper[4860]: source "/env/_master" Jan 21 21:08:51 crc kubenswrapper[4860]: set +o allexport Jan 21 21:08:51 crc kubenswrapper[4860]: fi Jan 21 21:08:51 crc kubenswrapper[4860]: Jan 21 21:08:51 crc kubenswrapper[4860]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 21 21:08:51 crc kubenswrapper[4860]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 21:08:51 crc kubenswrapper[4860]: --disable-webhook \ Jan 21 21:08:51 crc kubenswrapper[4860]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 21 21:08:51 crc kubenswrapper[4860]: --loglevel="${LOGLEVEL}" Jan 21 21:08:51 crc kubenswrapper[4860]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 21:08:51 crc kubenswrapper[4860]: > logger="UnhandledError" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.269139 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.278451 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.283062 4860 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 21:08:51 crc kubenswrapper[4860]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 21 21:08:51 crc kubenswrapper[4860]: set -o allexport Jan 21 21:08:51 crc kubenswrapper[4860]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 21 21:08:51 crc kubenswrapper[4860]: source /etc/kubernetes/apiserver-url.env Jan 21 21:08:51 crc kubenswrapper[4860]: else Jan 21 21:08:51 crc kubenswrapper[4860]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 21 21:08:51 crc kubenswrapper[4860]: exit 1 Jan 21 21:08:51 crc kubenswrapper[4860]: fi Jan 21 21:08:51 crc kubenswrapper[4860]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 21 21:08:51 crc kubenswrapper[4860]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 21:08:51 crc kubenswrapper[4860]: > logger="UnhandledError" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.284346 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.296898 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.314521 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.368554 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.370785 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.370848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.370861 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.370981 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.386441 4860 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.386438 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.386605 4860 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.386662 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.388097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.388139 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.388149 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.388165 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.388185 4860 setters.go:603] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:51Z","lastTransitionTime":"2026-01-21T21:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.506761 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.506984 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:08:52.506950041 +0000 UTC m=+24.729128511 (durationBeforeRetry 1s).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.508555 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.518974 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.519022 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.519031 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.519053 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.519064 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:51Z","lastTransitionTime":"2026-01-21T21:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.537923 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.546135 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.546173 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.546182 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.546199 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.546208 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:51Z","lastTransitionTime":"2026-01-21T21:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.562242 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.573408 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.573797 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.574009 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.574147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.574234 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:51Z","lastTransitionTime":"2026-01-21T21:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.677966 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:35:45.983699974 +0000 UTC
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.679278 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.679358 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.679392 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.679438 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.679535 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.679436 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c53
7fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\
\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b
7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi
-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":
\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.679603 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:52.679586767 +0000 UTC m=+24.901765237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.679865 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.679897 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.679913 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.679963 4860 projected.go:288] Couldn't 
get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.679988 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:52.679971819 +0000 UTC m=+24.902150439 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.679990 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.680026 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.680034 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.680054 4860 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:52.680046451 +0000 UTC m=+24.902225131 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.680099 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:52.680089062 +0000 UTC m=+24.902267732 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.705845 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.705897 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.705909 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.705925 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.705956 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:51Z","lastTransitionTime":"2026-01-21T21:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.858318 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b382d3d8e90de1fc831268b582e2ea97009d192a3aeeec480203b1faa96b3599"} Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.861491 4860 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 21:08:51 crc kubenswrapper[4860]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 21 21:08:51 crc kubenswrapper[4860]: if [[ -f "/env/_master" ]]; then Jan 21 21:08:51 crc kubenswrapper[4860]: set -o allexport Jan 21 21:08:51 crc kubenswrapper[4860]: source "/env/_master" Jan 21 21:08:51 crc kubenswrapper[4860]: set +o allexport Jan 21 21:08:51 crc kubenswrapper[4860]: fi Jan 21 21:08:51 crc kubenswrapper[4860]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 21 21:08:51 crc kubenswrapper[4860]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 21 21:08:51 crc kubenswrapper[4860]: ho_enable="--enable-hybrid-overlay" Jan 21 21:08:51 crc kubenswrapper[4860]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 21 21:08:51 crc kubenswrapper[4860]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 21 21:08:51 crc kubenswrapper[4860]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 21 21:08:51 crc kubenswrapper[4860]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 21:08:51 crc kubenswrapper[4860]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 21 21:08:51 crc kubenswrapper[4860]: --webhook-host=127.0.0.1 \ Jan 21 21:08:51 crc kubenswrapper[4860]: --webhook-port=9743 \ Jan 21 21:08:51 crc kubenswrapper[4860]: ${ho_enable} \ Jan 21 21:08:51 crc kubenswrapper[4860]: --enable-interconnect \ Jan 21 21:08:51 crc kubenswrapper[4860]: --disable-approver \ Jan 21 21:08:51 crc kubenswrapper[4860]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 21 21:08:51 crc kubenswrapper[4860]: --wait-for-kubernetes-api=200s \ Jan 21 21:08:51 crc kubenswrapper[4860]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 21 21:08:51 crc kubenswrapper[4860]: --loglevel="${LOGLEVEL}" Jan 21 21:08:51 crc kubenswrapper[4860]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 21:08:51 crc kubenswrapper[4860]: > logger="UnhandledError" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.861889 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.867185 4860 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 
21:08:51 crc kubenswrapper[4860]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 21 21:08:51 crc kubenswrapper[4860]: if [[ -f "/env/_master" ]]; then Jan 21 21:08:51 crc kubenswrapper[4860]: set -o allexport Jan 21 21:08:51 crc kubenswrapper[4860]: source "/env/_master" Jan 21 21:08:51 crc kubenswrapper[4860]: set +o allexport Jan 21 21:08:51 crc kubenswrapper[4860]: fi Jan 21 21:08:51 crc kubenswrapper[4860]: Jan 21 21:08:51 crc kubenswrapper[4860]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 21 21:08:51 crc kubenswrapper[4860]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 21:08:51 crc kubenswrapper[4860]: --disable-webhook \ Jan 21 21:08:51 crc kubenswrapper[4860]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 21 21:08:51 crc kubenswrapper[4860]: --loglevel="${LOGLEVEL}" Jan 21 21:08:51 crc kubenswrapper[4860]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 21:08:51 crc kubenswrapper[4860]: > logger="UnhandledError" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.868343 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.870794 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae" exitCode=255 Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.870878 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae"} Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.872168 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"78f06ee1c6703a841692ec22bc5768e84ff09d8367bb844febf57b3553141e43"} Jan 21 21:08:51 crc kubenswrapper[4860]: I0121 21:08:51.873073 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f30ee67b79b40140c05aec52c11df16a8004b83563d16f840fa1354417905233"} Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.874040 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.878096 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.878353 4860 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 21:08:51 crc kubenswrapper[4860]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 21 21:08:51 crc kubenswrapper[4860]: set -o allexport Jan 21 21:08:51 crc kubenswrapper[4860]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 21 21:08:51 crc kubenswrapper[4860]: source /etc/kubernetes/apiserver-url.env Jan 21 21:08:51 crc kubenswrapper[4860]: else Jan 21 21:08:51 crc kubenswrapper[4860]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 21 21:08:51 crc kubenswrapper[4860]: exit 1 Jan 21 21:08:51 crc kubenswrapper[4860]: fi Jan 21 21:08:51 crc kubenswrapper[4860]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 21 21:08:51 crc kubenswrapper[4860]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 21:08:51 crc kubenswrapper[4860]: > logger="UnhandledError" Jan 21 21:08:51 crc kubenswrapper[4860]: E0121 21:08:51.879535 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.148512 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.148687 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.160239 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.160339 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.160361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.160395 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.160411 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:52Z","lastTransitionTime":"2026-01-21T21:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.218880 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.223400 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.254373 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.262339 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.262377 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.262388 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.262404 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.262416 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:52Z","lastTransitionTime":"2026-01-21T21:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.473713 4860 scope.go:117] "RemoveContainer" containerID="9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.474306 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.474343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.474356 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.474378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.474388 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:52Z","lastTransitionTime":"2026-01-21T21:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.564760 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.570192 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.570424 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:08:54.570392516 +0000 UTC m=+26.792570986 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.570844 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-6n8b5"] Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.571277 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6n8b5" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.571877 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ccxw8"] Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.572122 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ccxw8" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.577121 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.577472 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.577806 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.578703 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.578879 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.579004 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.579083 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.579207 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.579279 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.581777 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.582987 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.583008 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.583018 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.583034 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.583100 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:52Z","lastTransitionTime":"2026-01-21T21:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.586631 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.588006 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.591866 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.592745 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.594985 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.595574 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.597290 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.598409 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 21:08:52 crc 
kubenswrapper[4860]: I0121 21:08:52.603421 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.604336 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.606305 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.609029 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.611136 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.616271 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.616989 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.617798 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.620394 4860 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.621332 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.623086 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.623855 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.624690 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.628421 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.629957 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.630693 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.631380 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.632570 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.633420 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.634637 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.635428 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.638987 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.639663 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.640918 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.641583 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.642148 4860 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.646731 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.649085 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.649704 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.651038 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.665303 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.665270 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.666204 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.667303 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.668264 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.669740 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.670313 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.671523 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.672382 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.672635 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/95f1feb1-156a-4494-a3c9-30581a4bf19a-host\") pod \"node-ca-ccxw8\" (UID: \"95f1feb1-156a-4494-a3c9-30581a4bf19a\") " pod="openshift-image-registry/node-ca-ccxw8" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.672770 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/99d522d6-a954-4073-86aa-4c869d61585f-hosts-file\") pod \"node-resolver-6n8b5\" (UID: \"99d522d6-a954-4073-86aa-4c869d61585f\") " pod="openshift-dns/node-resolver-6n8b5" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.672807 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgr8n\" (UniqueName: \"kubernetes.io/projected/95f1feb1-156a-4494-a3c9-30581a4bf19a-kube-api-access-rgr8n\") pod \"node-ca-ccxw8\" (UID: \"95f1feb1-156a-4494-a3c9-30581a4bf19a\") " pod="openshift-image-registry/node-ca-ccxw8" Jan 21 
21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.672845 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qw7m\" (UniqueName: \"kubernetes.io/projected/99d522d6-a954-4073-86aa-4c869d61585f-kube-api-access-4qw7m\") pod \"node-resolver-6n8b5\" (UID: \"99d522d6-a954-4073-86aa-4c869d61585f\") " pod="openshift-dns/node-resolver-6n8b5" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.672868 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/95f1feb1-156a-4494-a3c9-30581a4bf19a-serviceca\") pod \"node-ca-ccxw8\" (UID: \"95f1feb1-156a-4494-a3c9-30581a4bf19a\") " pod="openshift-image-registry/node-ca-ccxw8" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.673754 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.674490 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.675772 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.676585 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.678049 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.678665 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.679808 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.680441 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.681104 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.682338 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.682985 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.684092 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:16:44.283770813 +0000 UTC Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.780527 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.780613 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.780647 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.780677 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/99d522d6-a954-4073-86aa-4c869d61585f-hosts-file\") pod \"node-resolver-6n8b5\" (UID: \"99d522d6-a954-4073-86aa-4c869d61585f\") " pod="openshift-dns/node-resolver-6n8b5" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.780708 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 
21:08:52.780732 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qw7m\" (UniqueName: \"kubernetes.io/projected/99d522d6-a954-4073-86aa-4c869d61585f-kube-api-access-4qw7m\") pod \"node-resolver-6n8b5\" (UID: \"99d522d6-a954-4073-86aa-4c869d61585f\") " pod="openshift-dns/node-resolver-6n8b5" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.780759 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/95f1feb1-156a-4494-a3c9-30581a4bf19a-serviceca\") pod \"node-ca-ccxw8\" (UID: \"95f1feb1-156a-4494-a3c9-30581a4bf19a\") " pod="openshift-image-registry/node-ca-ccxw8" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.780785 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgr8n\" (UniqueName: \"kubernetes.io/projected/95f1feb1-156a-4494-a3c9-30581a4bf19a-kube-api-access-rgr8n\") pod \"node-ca-ccxw8\" (UID: \"95f1feb1-156a-4494-a3c9-30581a4bf19a\") " pod="openshift-image-registry/node-ca-ccxw8" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.780807 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/95f1feb1-156a-4494-a3c9-30581a4bf19a-host\") pod \"node-ca-ccxw8\" (UID: \"95f1feb1-156a-4494-a3c9-30581a4bf19a\") " pod="openshift-image-registry/node-ca-ccxw8" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.780917 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/95f1feb1-156a-4494-a3c9-30581a4bf19a-host\") pod \"node-ca-ccxw8\" (UID: \"95f1feb1-156a-4494-a3c9-30581a4bf19a\") " pod="openshift-image-registry/node-ca-ccxw8" Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781154 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781188 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781205 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781275 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:54.781249536 +0000 UTC m=+27.003428006 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781354 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781369 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781378 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781462 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:54.781452543 +0000 UTC m=+27.003631023 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781533 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781562 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:54.781553185 +0000 UTC m=+27.003731655 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.781624 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/99d522d6-a954-4073-86aa-4c869d61585f-hosts-file\") pod \"node-resolver-6n8b5\" (UID: \"99d522d6-a954-4073-86aa-4c869d61585f\") " pod="openshift-dns/node-resolver-6n8b5" Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781697 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.781753 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:54.78172289 +0000 UTC m=+27.003901370 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.783819 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/95f1feb1-156a-4494-a3c9-30581a4bf19a-serviceca\") pod \"node-ca-ccxw8\" (UID: \"95f1feb1-156a-4494-a3c9-30581a4bf19a\") " pod="openshift-image-registry/node-ca-ccxw8" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.803412 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.807307 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.807369 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.807381 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.807411 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.807426 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:52Z","lastTransitionTime":"2026-01-21T21:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.885946 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgr8n\" (UniqueName: \"kubernetes.io/projected/95f1feb1-156a-4494-a3c9-30581a4bf19a-kube-api-access-rgr8n\") pod \"node-ca-ccxw8\" (UID: \"95f1feb1-156a-4494-a3c9-30581a4bf19a\") " pod="openshift-image-registry/node-ca-ccxw8" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.886826 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qw7m\" (UniqueName: \"kubernetes.io/projected/99d522d6-a954-4073-86aa-4c869d61585f-kube-api-access-4qw7m\") pod \"node-resolver-6n8b5\" (UID: \"99d522d6-a954-4073-86aa-4c869d61585f\") " pod="openshift-dns/node-resolver-6n8b5" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.894621 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6n8b5" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.906166 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.918541 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.918596 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.918648 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.918671 4860 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.918683 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:52Z","lastTransitionTime":"2026-01-21T21:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.922226 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ccxw8" Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.923509 4860 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.933370 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:52 crc kubenswrapper[4860]: W0121 21:08:52.936279 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99d522d6_a954_4073_86aa_4c869d61585f.slice/crio-ebbf2af05eadefaf1e1d6dc63fac74a3c5929d5397695abf3ed1785bd385936e WatchSource:0}: Error finding container ebbf2af05eadefaf1e1d6dc63fac74a3c5929d5397695abf3ed1785bd385936e: Status 404 returned error can't find the container with id ebbf2af05eadefaf1e1d6dc63fac74a3c5929d5397695abf3ed1785bd385936e Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.944203 4860 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 21:08:52 crc kubenswrapper[4860]: container 
&Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Jan 21 21:08:52 crc kubenswrapper[4860]: set -uo pipefail Jan 21 21:08:52 crc kubenswrapper[4860]: Jan 21 21:08:52 crc kubenswrapper[4860]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 21 21:08:52 crc kubenswrapper[4860]: Jan 21 21:08:52 crc kubenswrapper[4860]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 21 21:08:52 crc kubenswrapper[4860]: HOSTS_FILE="/etc/hosts" Jan 21 21:08:52 crc kubenswrapper[4860]: TEMP_FILE="/etc/hosts.tmp" Jan 21 21:08:52 crc kubenswrapper[4860]: Jan 21 21:08:52 crc kubenswrapper[4860]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 21 21:08:52 crc kubenswrapper[4860]: Jan 21 21:08:52 crc kubenswrapper[4860]: # Make a temporary file with the old hosts file's attributes. Jan 21 21:08:52 crc kubenswrapper[4860]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 21 21:08:52 crc kubenswrapper[4860]: echo "Failed to preserve hosts file. Exiting." Jan 21 21:08:52 crc kubenswrapper[4860]: exit 1 Jan 21 21:08:52 crc kubenswrapper[4860]: fi Jan 21 21:08:52 crc kubenswrapper[4860]: Jan 21 21:08:52 crc kubenswrapper[4860]: while true; do Jan 21 21:08:52 crc kubenswrapper[4860]: declare -A svc_ips Jan 21 21:08:52 crc kubenswrapper[4860]: for svc in "${services[@]}"; do Jan 21 21:08:52 crc kubenswrapper[4860]: # Fetch service IP from cluster dns if present. We make several tries Jan 21 21:08:52 crc kubenswrapper[4860]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 21 21:08:52 crc kubenswrapper[4860]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 21 21:08:52 crc kubenswrapper[4860]: # support UDP loadbalancers and require reaching DNS through TCP. 
Jan 21 21:08:52 crc kubenswrapper[4860]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 21:08:52 crc kubenswrapper[4860]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 21:08:52 crc kubenswrapper[4860]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 21:08:52 crc kubenswrapper[4860]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 21 21:08:52 crc kubenswrapper[4860]: for i in ${!cmds[*]} Jan 21 21:08:52 crc kubenswrapper[4860]: do Jan 21 21:08:52 crc kubenswrapper[4860]: ips=($(eval "${cmds[i]}")) Jan 21 21:08:52 crc kubenswrapper[4860]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 21 21:08:52 crc kubenswrapper[4860]: svc_ips["${svc}"]="${ips[@]}" Jan 21 21:08:52 crc kubenswrapper[4860]: break Jan 21 21:08:52 crc kubenswrapper[4860]: fi Jan 21 21:08:52 crc kubenswrapper[4860]: done Jan 21 21:08:52 crc kubenswrapper[4860]: done Jan 21 21:08:52 crc kubenswrapper[4860]: Jan 21 21:08:52 crc kubenswrapper[4860]: # Update /etc/hosts only if we get valid service IPs Jan 21 21:08:52 crc kubenswrapper[4860]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 21 21:08:52 crc kubenswrapper[4860]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 21 21:08:52 crc kubenswrapper[4860]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 21 21:08:52 crc kubenswrapper[4860]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 21 21:08:52 crc kubenswrapper[4860]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 21 21:08:52 crc kubenswrapper[4860]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 21 21:08:52 crc kubenswrapper[4860]: sleep 60 & wait Jan 21 21:08:52 crc kubenswrapper[4860]: continue Jan 21 21:08:52 crc kubenswrapper[4860]: fi Jan 21 21:08:52 crc kubenswrapper[4860]: Jan 21 21:08:52 crc kubenswrapper[4860]: # Append resolver entries for services Jan 21 21:08:52 crc kubenswrapper[4860]: rc=0 Jan 21 21:08:52 crc kubenswrapper[4860]: for svc in "${!svc_ips[@]}"; do Jan 21 21:08:52 crc kubenswrapper[4860]: for ip in ${svc_ips[${svc}]}; do Jan 21 21:08:52 crc kubenswrapper[4860]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 21 21:08:52 crc kubenswrapper[4860]: done Jan 21 21:08:52 crc kubenswrapper[4860]: done Jan 21 21:08:52 crc kubenswrapper[4860]: if [[ $rc -ne 0 ]]; then Jan 21 21:08:52 crc kubenswrapper[4860]: sleep 60 & wait Jan 21 21:08:52 crc kubenswrapper[4860]: continue Jan 21 21:08:52 crc kubenswrapper[4860]: fi Jan 21 21:08:52 crc kubenswrapper[4860]: Jan 21 21:08:52 crc kubenswrapper[4860]: Jan 21 21:08:52 crc kubenswrapper[4860]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 21 21:08:52 crc kubenswrapper[4860]: # Replace /etc/hosts with our modified version if needed Jan 21 21:08:52 crc kubenswrapper[4860]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 21 21:08:52 crc kubenswrapper[4860]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 21 21:08:52 crc kubenswrapper[4860]: fi Jan 21 21:08:52 crc kubenswrapper[4860]: sleep 60 & wait Jan 21 21:08:52 crc kubenswrapper[4860]: unset svc_ips Jan 21 21:08:52 crc kubenswrapper[4860]: done Jan 21 21:08:52 crc kubenswrapper[4860]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4qw7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-6n8b5_openshift-dns(99d522d6-a954-4073-86aa-4c869d61585f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 21:08:52 crc kubenswrapper[4860]: > logger="UnhandledError" Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.945776 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-6n8b5" 
podUID="99d522d6-a954-4073-86aa-4c869d61585f" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.953964 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.958144 4860 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 21:08:52 crc kubenswrapper[4860]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 21 21:08:52 crc kubenswrapper[4860]: while [ true ]; Jan 21 21:08:52 crc kubenswrapper[4860]: do Jan 21 21:08:52 crc kubenswrapper[4860]: for f in $(ls /tmp/serviceca); do Jan 21 21:08:52 crc kubenswrapper[4860]: echo $f Jan 21 21:08:52 crc kubenswrapper[4860]: ca_file_path="/tmp/serviceca/${f}" Jan 21 21:08:52 crc kubenswrapper[4860]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 21 21:08:52 crc kubenswrapper[4860]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 21 21:08:52 crc kubenswrapper[4860]: if [ -e "${reg_dir_path}" ]; then Jan 21 21:08:52 crc kubenswrapper[4860]: cp -u $ca_file_path 
$reg_dir_path/ca.crt Jan 21 21:08:52 crc kubenswrapper[4860]: else Jan 21 21:08:52 crc kubenswrapper[4860]: mkdir $reg_dir_path Jan 21 21:08:52 crc kubenswrapper[4860]: cp $ca_file_path $reg_dir_path/ca.crt Jan 21 21:08:52 crc kubenswrapper[4860]: fi Jan 21 21:08:52 crc kubenswrapper[4860]: done Jan 21 21:08:52 crc kubenswrapper[4860]: for d in $(ls /etc/docker/certs.d); do Jan 21 21:08:52 crc kubenswrapper[4860]: echo $d Jan 21 21:08:52 crc kubenswrapper[4860]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 21 21:08:52 crc kubenswrapper[4860]: reg_conf_path="/tmp/serviceca/${dp}" Jan 21 21:08:52 crc kubenswrapper[4860]: if [ ! -e "${reg_conf_path}" ]; then Jan 21 21:08:52 crc kubenswrapper[4860]: rm -rf /etc/docker/certs.d/$d Jan 21 21:08:52 crc kubenswrapper[4860]: fi Jan 21 21:08:52 crc kubenswrapper[4860]: done Jan 21 21:08:52 crc kubenswrapper[4860]: sleep 60 & wait ${!} Jan 21 21:08:52 crc kubenswrapper[4860]: done Jan 21 21:08:52 crc kubenswrapper[4860]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgr8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-ccxw8_openshift-image-registry(95f1feb1-156a-4494-a3c9-30581a4bf19a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 21:08:52 crc kubenswrapper[4860]: > logger="UnhandledError" Jan 21 21:08:52 crc kubenswrapper[4860]: E0121 21:08:52.962283 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-ccxw8" podUID="95f1feb1-156a-4494-a3c9-30581a4bf19a" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.969728 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.983571 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:52 crc kubenswrapper[4860]: I0121 21:08:52.996074 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.015235 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.021057 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.021106 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.021117 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.021133 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.021146 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:53Z","lastTransitionTime":"2026-01-21T21:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.042362 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.057265 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.083971 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.153146 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.153206 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.153224 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.153275 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.153295 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:53Z","lastTransitionTime":"2026-01-21T21:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.170575 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.256287 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.256329 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.256342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.256361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.256372 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:53Z","lastTransitionTime":"2026-01-21T21:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.331135 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.366009 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.366068 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.366084 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.366106 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.366119 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:53Z","lastTransitionTime":"2026-01-21T21:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.385648 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-s67xh"] Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.386245 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.387532 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-w47lx"] Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.387815 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.396077 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.396408 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.396953 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.403841 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.405239 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.405927 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.406182 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.408664 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.408817 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.416781 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.420008 4860 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.443055 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495042 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-os-release\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495083 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjmhb\" (UniqueName: \"kubernetes.io/projected/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-kube-api-access-hjmhb\") pod 
\"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495130 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-var-lib-cni-bin\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495161 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-daemon-config\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495180 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebb59cca-ede6-44c6-850b-28d109e50dea-mcd-auth-proxy-config\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495198 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-var-lib-cni-multus\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495214 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-hostroot\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495229 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-etc-kubernetes\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495246 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-cni-binary-copy\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495262 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-conf-dir\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495290 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb8lx\" (UniqueName: \"kubernetes.io/projected/ebb59cca-ede6-44c6-850b-28d109e50dea-kube-api-access-qb8lx\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495309 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-socket-dir-parent\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495330 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebb59cca-ede6-44c6-850b-28d109e50dea-proxy-tls\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495353 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-cni-dir\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495375 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-run-netns\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495397 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-system-cni-dir\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495438 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-var-lib-kubelet\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495456 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-cnibin\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495476 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-run-k8s-cni-cncf-io\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495492 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-run-multus-certs\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495510 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebb59cca-ede6-44c6-850b-28d109e50dea-rootfs\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.495609 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 
21:08:53.496071 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.496101 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.496113 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.496129 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.496142 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:53Z","lastTransitionTime":"2026-01-21T21:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.630282 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-var-lib-kubelet\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.630401 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-cnibin\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.630444 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-run-k8s-cni-cncf-io\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.630521 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-run-multus-certs\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.630551 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.630793 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebb59cca-ede6-44c6-850b-28d109e50dea-rootfs\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: E0121 21:08:53.630845 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.630955 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-var-lib-kubelet\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631106 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-cnibin\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631155 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-run-k8s-cni-cncf-io\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " 
pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.630474 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.630549 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebb59cca-ede6-44c6-850b-28d109e50dea-rootfs\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631299 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-os-release\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631353 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjmhb\" (UniqueName: \"kubernetes.io/projected/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-kube-api-access-hjmhb\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631432 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-var-lib-cni-bin\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631476 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-daemon-config\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631520 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-run-multus-certs\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631518 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebb59cca-ede6-44c6-850b-28d109e50dea-mcd-auth-proxy-config\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631628 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-var-lib-cni-multus\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631678 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-hostroot\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631737 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-etc-kubernetes\") pod 
\"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631762 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-cni-binary-copy\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631798 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-conf-dir\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631835 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb8lx\" (UniqueName: \"kubernetes.io/projected/ebb59cca-ede6-44c6-850b-28d109e50dea-kube-api-access-qb8lx\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631867 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-socket-dir-parent\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631891 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebb59cca-ede6-44c6-850b-28d109e50dea-proxy-tls\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " 
pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631920 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-cni-dir\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631963 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-os-release\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: E0121 21:08:53.631412 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.632009 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-var-lib-cni-bin\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.632034 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-conf-dir\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.631971 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-run-netns\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.632061 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-var-lib-cni-multus\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.632084 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-system-cni-dir\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 
21:08:53.632087 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-hostroot\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.632126 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-etc-kubernetes\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.632399 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-socket-dir-parent\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.632873 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebb59cca-ede6-44c6-850b-28d109e50dea-mcd-auth-proxy-config\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.632991 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-host-run-netns\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.633065 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-cni-dir\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.633156 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-system-cni-dir\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.633159 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-cni-binary-copy\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.633445 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-multus-daemon-config\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.637211 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebb59cca-ede6-44c6-850b-28d109e50dea-proxy-tls\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.638342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.638418 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.638432 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.638462 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.638491 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:53Z","lastTransitionTime":"2026-01-21T21:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.648642 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.669159 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb8lx\" (UniqueName: \"kubernetes.io/projected/ebb59cca-ede6-44c6-850b-28d109e50dea-kube-api-access-qb8lx\") pod \"machine-config-daemon-w47lx\" (UID: \"ebb59cca-ede6-44c6-850b-28d109e50dea\") " pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.675459 4860 reflector.go:368] Caches populated for 
*v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.678719 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjmhb\" (UniqueName: \"kubernetes.io/projected/e2a7ca69-9cb5-41b5-9213-72165a9fc8e1-kube-api-access-hjmhb\") pod \"multus-s67xh\" (UID: \"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\") " pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.681276 4860 csr.go:261] certificate signing request csr-9rml8 is approved, waiting to be issued Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.684515 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:46:26.034198732 +0000 UTC Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.685959 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.691957 4860 csr.go:257] certificate signing request csr-9rml8 is issued Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.700628 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.705348 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-s67xh" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.712391 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.713917 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: W0121 21:08:53.735871 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebb59cca_ede6_44c6_850b_28d109e50dea.slice/crio-4c414841177aa087dd6b84bf023f10a26fb1a72f3befdad68c38ddb6e8ed3ed9 WatchSource:0}: Error finding container 4c414841177aa087dd6b84bf023f10a26fb1a72f3befdad68c38ddb6e8ed3ed9: Status 404 returned error can't find the container with id 4c414841177aa087dd6b84bf023f10a26fb1a72f3befdad68c38ddb6e8ed3ed9 Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.736611 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: E0121 21:08:53.737550 4860 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 21:08:53 crc kubenswrapper[4860]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 21 21:08:53 crc kubenswrapper[4860]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 21 21:08:53 crc kubenswrapper[4860]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjmhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-s67xh_openshift-multus(e2a7ca69-9cb5-41b5-9213-72165a9fc8e1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 21:08:53 crc kubenswrapper[4860]: > logger="UnhandledError" Jan 21 21:08:53 crc kubenswrapper[4860]: E0121 21:08:53.738784 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-s67xh" podUID="e2a7ca69-9cb5-41b5-9213-72165a9fc8e1" Jan 21 21:08:53 crc kubenswrapper[4860]: E0121 21:08:53.744222 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qb8lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.745317 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.745340 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.745348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.745365 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.745374 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:53Z","lastTransitionTime":"2026-01-21T21:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:53 crc kubenswrapper[4860]: E0121 21:08:53.746675 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qb8lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 21:08:53 crc kubenswrapper[4860]: E0121 21:08:53.747913 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.752794 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.780031 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.787344 4860 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.792319 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.806371 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.822371 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.837333 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.847915 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.848823 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.848886 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.848898 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.848954 4860 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.848968 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:53Z","lastTransitionTime":"2026-01-21T21:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.918117 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"4c414841177aa087dd6b84bf023f10a26fb1a72f3befdad68c38ddb6e8ed3ed9"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.919848 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ccxw8" event={"ID":"95f1feb1-156a-4494-a3c9-30581a4bf19a","Type":"ContainerStarted","Data":"115b5b79c2fe3297b23f62725e638e2202d48f907f76c5d80907099ef1e8373a"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.931724 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.934067 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.934487 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 
21:08:53.937547 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s67xh" event={"ID":"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1","Type":"ContainerStarted","Data":"154fe122e3e0f22711f5312606d53b8729d957222411169546b7e351adff7367"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.939492 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6n8b5" event={"ID":"99d522d6-a954-4073-86aa-4c869d61585f","Type":"ContainerStarted","Data":"ebbf2af05eadefaf1e1d6dc63fac74a3c5929d5397695abf3ed1785bd385936e"} Jan 21 21:08:53 crc kubenswrapper[4860]: I0121 21:08:53.942246 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.012696 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-77hw7"] Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.019188 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pzw2c"] Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.021258 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.029986 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.036629 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.036958 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.037132 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.037158 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.037310 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.037563 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.037756 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.037960 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-os-release\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc 
kubenswrapper[4860]: I0121 21:08:54.038425 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.038466 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.038486 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.038520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.038542 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:54Z","lastTransitionTime":"2026-01-21T21:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.040309 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.038018 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-ovn-kubernetes\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.051833 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovn-node-metrics-cert\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.051895 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-node-log\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052061 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052080 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-ovn\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052464 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-cni-binary-copy\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052495 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-tuning-conf-dir\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052530 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tb7z\" (UniqueName: \"kubernetes.io/projected/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-kube-api-access-9tb7z\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052652 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-slash\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052688 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-systemd-units\") 
pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052735 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-kubelet\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052764 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-log-socket\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052791 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-netns\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052822 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-var-lib-openvswitch\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052853 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-config\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052953 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-systemd\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.052987 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-netd\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.053120 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29tmd\" (UniqueName: \"kubernetes.io/projected/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-kube-api-access-29tmd\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.053195 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-env-overrides\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.053290 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-script-lib\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.053346 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-cnibin\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.053376 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-etc-openvswitch\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.053432 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-system-cni-dir\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.053488 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc 
kubenswrapper[4860]: I0121 21:08:54.053528 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.053895 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-bin\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.054161 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-openvswitch\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.098211 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.146969 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.147062 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.147087 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.147134 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.147165 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:54Z","lastTransitionTime":"2026-01-21T21:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156291 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156365 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156409 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-bin\") pod 
\"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156454 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-openvswitch\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156489 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-ovn-kubernetes\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156543 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovn-node-metrics-cert\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156585 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-os-release\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156626 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-node-log\") pod \"ovnkube-node-pzw2c\" (UID: 
\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156665 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-ovn\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156703 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tb7z\" (UniqueName: \"kubernetes.io/projected/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-kube-api-access-9tb7z\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156740 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-cni-binary-copy\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156773 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-tuning-conf-dir\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156794 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-slash\") pod \"ovnkube-node-pzw2c\" (UID: 
\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156818 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-systemd-units\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156845 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-kubelet\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156865 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-log-socket\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156908 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-netns\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156952 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-var-lib-openvswitch\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.156988 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-config\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.157009 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-systemd\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.157031 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-netd\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.157049 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29tmd\" (UniqueName: \"kubernetes.io/projected/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-kube-api-access-29tmd\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.157070 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-etc-openvswitch\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.157086 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-env-overrides\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.157100 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-script-lib\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.157117 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-cnibin\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.157163 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-system-cni-dir\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.157518 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-system-cni-dir\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " 
pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.157603 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.158583 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-systemd-units\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.158685 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-bin\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.158723 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-openvswitch\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.158756 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-ovn-kubernetes\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.159439 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.159534 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-kubelet\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.159589 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-log-socket\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.159693 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-netns\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.159742 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-var-lib-openvswitch\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc 
kubenswrapper[4860]: I0121 21:08:54.160780 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-config\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.160861 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-systemd\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.160898 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-netd\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.161578 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-etc-openvswitch\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.162278 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-env-overrides\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.162413 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-node-log\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.162807 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-tuning-conf-dir\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.162976 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-ovn\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.163052 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-slash\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.163104 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-cnibin\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.163155 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-script-lib\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.163733 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-os-release\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.164116 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-cni-binary-copy\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.187076 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovn-node-metrics-cert\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.208673 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.240664 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29tmd\" (UniqueName: \"kubernetes.io/projected/9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04-kube-api-access-29tmd\") pod \"multus-additional-cni-plugins-77hw7\" (UID: \"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\") " pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.242559 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.245261 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tb7z\" (UniqueName: \"kubernetes.io/projected/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-kube-api-access-9tb7z\") pod \"ovnkube-node-pzw2c\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.250502 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.250571 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.250586 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.250613 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.250632 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:54Z","lastTransitionTime":"2026-01-21T21:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.254804 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.268466 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.283376 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.301377 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.314878 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.328675 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.390696 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.390788 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-77hw7" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.392616 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.648318 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.648393 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.648632 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.648766 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:08:58.648714763 +0000 UTC m=+30.870893233 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.649441 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.649473 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.649481 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.649497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.649511 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:54Z","lastTransitionTime":"2026-01-21T21:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:54 crc kubenswrapper[4860]: W0121 21:08:54.672199 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a9e9fa6_0fb9_47bf_a3a6_ab04dc59ce04.slice/crio-f994b2acaca186712b6ef33f91f257a5fc74c8092651c9410c9c12aab2f73815 WatchSource:0}: Error finding container f994b2acaca186712b6ef33f91f257a5fc74c8092651c9410c9c12aab2f73815: Status 404 returned error can't find the container with id f994b2acaca186712b6ef33f91f257a5fc74c8092651c9410c9c12aab2f73815 Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.676240 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.686015 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 11:53:03.769889693 +0000 UTC Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.692889 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 21:03:53 +0000 UTC, rotation deadline is 2026-11-09 10:14:36.027843194 +0000 UTC Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.692988 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6997h5m41.334858456s for next certificate rotation Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.708652 4860 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.718834 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.732294 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.751313 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.763589 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.780271 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.790873 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.851123 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.851166 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.851211 4860 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.851246 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851385 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851428 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:58.851413509 +0000 UTC m=+31.073591979 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851587 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851671 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:58.851651376 +0000 UTC m=+31.073829846 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851677 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851724 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851748 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851771 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851784 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851799 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851827 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:58.851799851 +0000 UTC m=+31.073978321 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:54 crc kubenswrapper[4860]: E0121 21:08:54.851870 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 21:08:58.851853282 +0000 UTC m=+31.074031752 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.854843 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.854929 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.855069 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.855105 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.855122 4860 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:54Z","lastTransitionTime":"2026-01-21T21:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.859465 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.943554 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ccxw8" event={"ID":"95f1feb1-156a-4494-a3c9-30581a4bf19a","Type":"ContainerStarted","Data":"c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e"} Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.946901 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" 
event={"ID":"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04","Type":"ContainerStarted","Data":"f994b2acaca186712b6ef33f91f257a5fc74c8092651c9410c9c12aab2f73815"} Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.950105 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7"} Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.950156 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6"} Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.952407 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s67xh" event={"ID":"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1","Type":"ContainerStarted","Data":"0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168"} Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.953904 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6n8b5" event={"ID":"99d522d6-a954-4073-86aa-4c869d61585f","Type":"ContainerStarted","Data":"b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12"} Jan 21 21:08:54 crc kubenswrapper[4860]: I0121 21:08:54.955990 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"3069b2106995569c530b9d4edeaba0910294dd0b467c5c90b178a6a8a7783873"} Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.009107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 
21:08:55.009152 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.009164 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.009181 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.009194 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:55Z","lastTransitionTime":"2026-01-21T21:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.056469 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.138005 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.138052 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.138062 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.138082 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.138093 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:55Z","lastTransitionTime":"2026-01-21T21:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.158342 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.218701 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.247623 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.247701 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.247717 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.247741 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.247756 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:55Z","lastTransitionTime":"2026-01-21T21:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.635856 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:08:55 crc kubenswrapper[4860]: E0121 21:08:55.636094 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.635892 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:08:55 crc kubenswrapper[4860]: E0121 21:08:55.636227 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.641236 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.644490 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.644530 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.644544 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.644563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.644575 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:55Z","lastTransitionTime":"2026-01-21T21:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.654191 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.671409 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.685837 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.686148 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:05:59.834579688 +0000 UTC Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.699024 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.713164 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.727237 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.744405 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.747608 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.747668 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.747681 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.747702 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.747719 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:55Z","lastTransitionTime":"2026-01-21T21:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.756160 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.772421 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.787607 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.800728 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.812713 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.825794 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.839135 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.851309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.851376 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.851403 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.851427 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.851447 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:55Z","lastTransitionTime":"2026-01-21T21:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.852649 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.880047 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.954023 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.954467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.954583 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.954678 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.954756 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:55Z","lastTransitionTime":"2026-01-21T21:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.960506 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" event={"ID":"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04","Type":"ContainerStarted","Data":"d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a"} Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.962853 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f" exitCode=0 Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.962903 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f"} Jan 21 21:08:55 crc kubenswrapper[4860]: I0121 21:08:55.972916 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.011211 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitiali
zing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.023834 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.036437 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.049552 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.057736 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.058012 4860 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.058117 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.058270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.058371 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:56Z","lastTransitionTime":"2026-01-21T21:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.060872 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.074338 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.088795 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.161989 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.247475 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.247499 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.247524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.247538 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:56Z","lastTransitionTime":"2026-01-21T21:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.351241 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.351300 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.351312 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.351338 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.351356 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:56Z","lastTransitionTime":"2026-01-21T21:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.480817 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.480854 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.480865 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.480889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.480902 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:56Z","lastTransitionTime":"2026-01-21T21:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.508012 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.580663 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:56 crc kubenswrapper[4860]: E0121 21:08:56.580872 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.585091 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.585132 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.585143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.585164 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.585178 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:56Z","lastTransitionTime":"2026-01-21T21:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.735428 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:57:09.186199283 +0000 UTC Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.739252 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.739303 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.739314 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.739332 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:56 crc kubenswrapper[4860]: I0121 21:08:56.739343 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:56Z","lastTransitionTime":"2026-01-21T21:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:56.880331 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:56.880371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:56.880381 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:56.880398 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:56.880407 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:56Z","lastTransitionTime":"2026-01-21T21:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.055434 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.055509 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.055529 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.055553 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.055573 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:57Z","lastTransitionTime":"2026-01-21T21:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.097438 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310
cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.110222 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.129709 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.143323 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.159672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.159826 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.159843 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.159871 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.159885 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:57Z","lastTransitionTime":"2026-01-21T21:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.166383 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.183843 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.205567 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.217551 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.230396 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.242714 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.254954 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.263654 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.264019 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.264128 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.264258 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.264352 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:57Z","lastTransitionTime":"2026-01-21T21:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.279086 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.291316 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.303300 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.315732 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.324752 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.350209 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.364904 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.368419 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.368570 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.368674 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.368781 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.368915 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:57Z","lastTransitionTime":"2026-01-21T21:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.375808 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.473158 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.473870 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.473886 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.473918 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.473955 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:57Z","lastTransitionTime":"2026-01-21T21:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.578197 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.578243 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:08:57 crc kubenswrapper[4860]: E0121 21:08:57.578443 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:08:57 crc kubenswrapper[4860]: E0121 21:08:57.578603 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.578844 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.579241 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.579264 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.579284 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.579305 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:57Z","lastTransitionTime":"2026-01-21T21:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.683093 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.683136 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.683146 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.683165 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.683176 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:57Z","lastTransitionTime":"2026-01-21T21:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.736279 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:25:56.542128306 +0000 UTC Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.786951 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.787000 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.787012 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.787031 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.787044 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:57Z","lastTransitionTime":"2026-01-21T21:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.890831 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.890885 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.890899 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.890923 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.890962 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:57Z","lastTransitionTime":"2026-01-21T21:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.996097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.996173 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.996184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.996217 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:57 crc kubenswrapper[4860]: I0121 21:08:57.996262 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:57Z","lastTransitionTime":"2026-01-21T21:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.067640 4860 generic.go:334] "Generic (PLEG): container finished" podID="9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04" containerID="d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a" exitCode=0 Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.067747 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" event={"ID":"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04","Type":"ContainerDied","Data":"d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.078310 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.078362 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.078372 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.078382 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.078390 4860 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.078398 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.086744 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.099155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.099198 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.099207 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.099223 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.099232 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:58Z","lastTransitionTime":"2026-01-21T21:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.105760 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.121809 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.134715 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.148165 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.157474 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.174074 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.186585 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\
\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.196470 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242
b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.202822 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.202884 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.202895 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.202914 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.202926 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:58Z","lastTransitionTime":"2026-01-21T21:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.216619 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.235218 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.251612 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.265987 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.286553 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.310748 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.310799 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.310809 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.310831 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.310842 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:58Z","lastTransitionTime":"2026-01-21T21:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.414122 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.423392 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.423463 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.423497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.423529 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:58Z","lastTransitionTime":"2026-01-21T21:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.431231 4860 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.530964 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.531031 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.531044 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.531066 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.531085 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:58Z","lastTransitionTime":"2026-01-21T21:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.578883 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.579314 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.598743 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.611376 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.625715 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.635036 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.635078 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.635090 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.635111 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.635126 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:58Z","lastTransitionTime":"2026-01-21T21:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.645405 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.656577 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.672797 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.673027 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:09:06.672986495 +0000 UTC m=+38.895164965 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.673482 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.689078 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.703479 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.727178 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.736470 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:58:11.687945092 +0000 UTC Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.738408 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.738451 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.738464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.738484 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.738497 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:58Z","lastTransitionTime":"2026-01-21T21:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.743319 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.759178 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.772263 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.784150 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.799879 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.842613 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.842666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.842679 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.842699 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.842714 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:58Z","lastTransitionTime":"2026-01-21T21:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.877966 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.878113 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.878166 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878214 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878330 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 
21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878371 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:06.87832577 +0000 UTC m=+39.100504250 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.878224 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878406 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878415 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878501 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878524 4860 projected.go:194] Error preparing data for projected 
volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878411 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:06.878393572 +0000 UTC m=+39.100572042 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878454 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878607 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878623 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:06.878585988 +0000 UTC m=+39.100764478 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:58 crc kubenswrapper[4860]: E0121 21:08:58.878652 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:06.87864035 +0000 UTC m=+39.100818830 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.948846 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.948951 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.948977 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.949009 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:58 crc kubenswrapper[4860]: I0121 21:08:58.949065 4860 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:58Z","lastTransitionTime":"2026-01-21T21:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.052590 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.052632 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.052643 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.052661 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.052675 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:59Z","lastTransitionTime":"2026-01-21T21:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.085959 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" event={"ID":"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04","Type":"ContainerStarted","Data":"9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382"} Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.103406 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.120857 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.139976 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.156985 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.159723 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.159866 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.159902 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.160001 4860 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.160033 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:59Z","lastTransitionTime":"2026-01-21T21:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.176749 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"
},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tm
d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.199361 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.213840 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.232317 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.246184 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.261539 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.264590 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.264727 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.264742 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.264771 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.264783 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:59Z","lastTransitionTime":"2026-01-21T21:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.279234 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.293755 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.312481 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.342670 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.368358 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.368447 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.368461 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.368515 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.368541 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:59Z","lastTransitionTime":"2026-01-21T21:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.471717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.471755 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.471774 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.471795 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.471805 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:59Z","lastTransitionTime":"2026-01-21T21:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.575554 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.575629 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.575646 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.575673 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.575693 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:59Z","lastTransitionTime":"2026-01-21T21:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.578752 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.578791 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:08:59 crc kubenswrapper[4860]: E0121 21:08:59.578953 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:08:59 crc kubenswrapper[4860]: E0121 21:08:59.579076 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.679323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.679423 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.679447 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.679483 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.679508 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:59Z","lastTransitionTime":"2026-01-21T21:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.737057 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:28:14.663162067 +0000 UTC Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.783272 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.783336 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.783351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.783379 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.783402 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:59Z","lastTransitionTime":"2026-01-21T21:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.886806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.886920 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.886976 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.887034 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.887058 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:59Z","lastTransitionTime":"2026-01-21T21:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.992574 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.992631 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.992657 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.992691 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:08:59 crc kubenswrapper[4860]: I0121 21:08:59.992703 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:08:59Z","lastTransitionTime":"2026-01-21T21:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.097110 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.097380 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.097405 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.097438 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.097465 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:00Z","lastTransitionTime":"2026-01-21T21:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.201737 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.201797 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.201809 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.201829 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.201842 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:00Z","lastTransitionTime":"2026-01-21T21:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.305435 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.305591 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.305618 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.305691 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.305718 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:00Z","lastTransitionTime":"2026-01-21T21:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.408322 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.408400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.408415 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.408437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.408450 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:00Z","lastTransitionTime":"2026-01-21T21:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.512020 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.512074 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.512085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.512105 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.512118 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:00Z","lastTransitionTime":"2026-01-21T21:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.577915 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:00 crc kubenswrapper[4860]: E0121 21:09:00.578159 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.615540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.615625 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.615656 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.615690 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.615715 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:00Z","lastTransitionTime":"2026-01-21T21:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.717812 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.717868 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.717881 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.717898 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.717909 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:00Z","lastTransitionTime":"2026-01-21T21:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:00 crc kubenswrapper[4860]: I0121 21:09:00.738227 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:30:27.724572262 +0000 UTC Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.026182 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.026357 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.026373 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.026399 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.026412 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:01Z","lastTransitionTime":"2026-01-21T21:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.097219 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105"} Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.099326 4860 generic.go:334] "Generic (PLEG): container finished" podID="9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04" containerID="9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382" exitCode=0 Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.099400 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" event={"ID":"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04","Type":"ContainerDied","Data":"9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382"} Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.130434 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.132491 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.132533 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.132544 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.132565 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.132578 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:01Z","lastTransitionTime":"2026-01-21T21:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.145137 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.164704 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.177964 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.190522 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.202681 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.215157 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.224341 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.235780 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.235827 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.235844 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.235867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.235879 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:01Z","lastTransitionTime":"2026-01-21T21:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.244732 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.255768 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.267821 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.283147 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.296137 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.309103 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.341075 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.341141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.341161 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.341186 
4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.341200 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:01Z","lastTransitionTime":"2026-01-21T21:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.444348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.444382 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.444389 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.444405 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.444415 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:01Z","lastTransitionTime":"2026-01-21T21:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.547689 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.547754 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.547777 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.547802 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.547844 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:01Z","lastTransitionTime":"2026-01-21T21:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.578801 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.578904 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:01 crc kubenswrapper[4860]: E0121 21:09:01.579026 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:01 crc kubenswrapper[4860]: E0121 21:09:01.579198 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.650587 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.650620 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.650630 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.650645 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.650654 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:01Z","lastTransitionTime":"2026-01-21T21:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.738556 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:58:13.148728411 +0000 UTC Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.753663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.753716 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.753727 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.753743 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.753752 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:01Z","lastTransitionTime":"2026-01-21T21:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.863722 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.863831 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.863847 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.863874 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.863890 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:01Z","lastTransitionTime":"2026-01-21T21:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.966208 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.966253 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.966267 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.966288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:01 crc kubenswrapper[4860]: I0121 21:09:01.966303 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:01Z","lastTransitionTime":"2026-01-21T21:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.069368 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.069429 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.069449 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.069491 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.069505 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.105016 4860 generic.go:334] "Generic (PLEG): container finished" podID="9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04" containerID="42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd" exitCode=0 Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.105069 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" event={"ID":"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04","Type":"ContainerDied","Data":"42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd"} Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.122624 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a4
5dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.154465 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.167641 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.172569 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.172597 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.172606 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 
21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.172621 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.172631 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.181751 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.193916 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.204086 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.220261 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.232963 4860 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.251114 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.262643 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.274320 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.276061 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.276128 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 
21:09:02.276147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.276173 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.276187 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.287789 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.299545 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.311328 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.376410 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.376525 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.376539 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.376567 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.376581 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: E0121 21:09:02.396466 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.402213 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.402287 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.402309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.402340 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.402365 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: E0121 21:09:02.414003 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.418904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.418970 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.418985 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.419012 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.419027 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: E0121 21:09:02.430211 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.435360 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.435456 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.435470 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.435493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.435508 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: E0121 21:09:02.447099 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.452059 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.452111 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.452127 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.452148 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.452160 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: E0121 21:09:02.465877 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:02 crc kubenswrapper[4860]: E0121 21:09:02.466121 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.469377 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.469437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.469451 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.469483 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.469507 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.573008 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.573106 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.573124 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.573155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.573175 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.578346 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:02 crc kubenswrapper[4860]: E0121 21:09:02.578587 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.676772 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.676861 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.676900 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.676993 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.677030 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.739103 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:06:48.313982081 +0000 UTC Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.780045 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.780514 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.780655 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.780751 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.780835 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.884003 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.884044 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.884055 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.884071 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.884081 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.986990 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.987030 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.987041 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.987057 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:02 crc kubenswrapper[4860]: I0121 21:09:02.987068 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:02Z","lastTransitionTime":"2026-01-21T21:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.090313 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.090356 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.090372 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.090391 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.090404 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:03Z","lastTransitionTime":"2026-01-21T21:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.112517 4860 generic.go:334] "Generic (PLEG): container finished" podID="9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04" containerID="bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722" exitCode=0 Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.112622 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" event={"ID":"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04","Type":"ContainerDied","Data":"bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722"} Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.126206 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a4
5dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.151829 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.167341 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.185468 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.193567 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.193620 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.193632 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.193655 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.193673 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:03Z","lastTransitionTime":"2026-01-21T21:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.200959 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:
08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.214619 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.229080 4860 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.239587 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.255762 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.273706 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.285295 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.297724 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.297786 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 
21:09:03.297799 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.297816 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.297826 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:03Z","lastTransitionTime":"2026-01-21T21:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.298638 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.309665 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.320620 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.403009 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.403055 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.403067 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.403084 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.403099 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:03Z","lastTransitionTime":"2026-01-21T21:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.506666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.506717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.506731 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.506758 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.506771 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:03Z","lastTransitionTime":"2026-01-21T21:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.578906 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.579357 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:03 crc kubenswrapper[4860]: E0121 21:09:03.579440 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:03 crc kubenswrapper[4860]: E0121 21:09:03.579698 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.807796 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:57:43.824470515 +0000 UTC Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.811424 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.811950 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.811970 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.811988 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.812008 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:03Z","lastTransitionTime":"2026-01-21T21:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.914153 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.914187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.914196 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.914232 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:03 crc kubenswrapper[4860]: I0121 21:09:03.914242 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:03Z","lastTransitionTime":"2026-01-21T21:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.017409 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.017484 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.017499 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.017520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.017532 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:04Z","lastTransitionTime":"2026-01-21T21:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.120435 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.120481 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.120490 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.120508 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.120519 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:04Z","lastTransitionTime":"2026-01-21T21:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.121968 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" event={"ID":"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04","Type":"ContainerStarted","Data":"f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.127787 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.128210 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.134794 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.160555 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.244678 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.244731 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.244746 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.244795 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.244814 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:04Z","lastTransitionTime":"2026-01-21T21:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.249528 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.251166 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.264099 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.280598 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.343965 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.352606 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.352711 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.352735 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.352802 4860 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.352837 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:04Z","lastTransitionTime":"2026-01-21T21:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.360837 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29t
md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.372148 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.390892 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.394358 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.414003 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.424961 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.517991 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.518025 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.518033 4860 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.518052 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.518061 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:04Z","lastTransitionTime":"2026-01-21T21:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.523608 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.537566 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.551371 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.561925 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.585574 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.585557 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-l
ib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25
568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: E0121 21:09:04.585894 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.595158 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs
.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.610644 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.622044 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.622101 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:04 crc 
kubenswrapper[4860]: I0121 21:09:04.622118 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.622158 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.622176 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:04Z","lastTransitionTime":"2026-01-21T21:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.625657 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.636866 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.652586 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29t
md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.668341 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.677370 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.695288 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.708676 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.721887 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.739863 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.740405 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.740359 4860 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.740422 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.740646 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.740666 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:04Z","lastTransitionTime":"2026-01-21T21:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.757744 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310
cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.774016 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.808684 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:48:26.087004061 +0000 UTC Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.845554 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 
21:09:04.845591 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.845602 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.845629 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.845640 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:04Z","lastTransitionTime":"2026-01-21T21:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.845920 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.858964 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.869342 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.889303 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29t
md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.901565 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.913834 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.926040 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.935695 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.946521 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.948255 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.948293 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.948306 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.948329 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.948344 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:04Z","lastTransitionTime":"2026-01-21T21:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.960869 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.974534 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.986522 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:04 crc kubenswrapper[4860]: I0121 21:09:04.996173 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.006225 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.027851 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.051682 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.051729 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.051743 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.051761 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.051773 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:05Z","lastTransitionTime":"2026-01-21T21:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.142786 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.160018 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.160060 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.160071 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.160091 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.160102 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:05Z","lastTransitionTime":"2026-01-21T21:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.262833 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.262862 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.262873 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.262893 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.262906 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:05Z","lastTransitionTime":"2026-01-21T21:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.367589 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.367697 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.367718 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.367752 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.367779 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:05Z","lastTransitionTime":"2026-01-21T21:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.470601 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.470639 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.470649 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.470666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.470678 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:05Z","lastTransitionTime":"2026-01-21T21:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.573098 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.573425 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.573449 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.573487 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.573507 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:05Z","lastTransitionTime":"2026-01-21T21:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.578451 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.578608 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:05 crc kubenswrapper[4860]: E0121 21:09:05.578734 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:05 crc kubenswrapper[4860]: E0121 21:09:05.579310 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.677352 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.677388 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.677400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.677417 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.677429 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:05Z","lastTransitionTime":"2026-01-21T21:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.782515 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.782579 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.782595 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.782627 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.782645 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:05Z","lastTransitionTime":"2026-01-21T21:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.809505 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:42:46.002883569 +0000 UTC Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.885887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.886007 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.886064 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.886104 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.886126 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:05Z","lastTransitionTime":"2026-01-21T21:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.989833 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.989913 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.989974 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.990001 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:05 crc kubenswrapper[4860]: I0121 21:09:05.990017 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:05Z","lastTransitionTime":"2026-01-21T21:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.094481 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.094532 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.094545 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.094568 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.094581 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:06Z","lastTransitionTime":"2026-01-21T21:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.147751 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.197992 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.198062 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.198079 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.198104 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.198120 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:06Z","lastTransitionTime":"2026-01-21T21:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.302797 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.302875 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.302894 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.302921 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.302968 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:06Z","lastTransitionTime":"2026-01-21T21:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.406791 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.406882 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.406904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.407002 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.407049 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:06Z","lastTransitionTime":"2026-01-21T21:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.510870 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.510911 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.510922 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.510953 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.510967 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:06Z","lastTransitionTime":"2026-01-21T21:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.580417 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.581256 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.614838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.614974 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.614996 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.615023 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.615059 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:06Z","lastTransitionTime":"2026-01-21T21:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.718998 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.719048 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.719069 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.719092 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.719109 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:06Z","lastTransitionTime":"2026-01-21T21:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.744140 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.744860 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 21:09:22.744798238 +0000 UTC m=+54.966976708 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.873148 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:08:28.92099254 +0000 UTC Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.879718 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.879781 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.879794 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.879812 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.879825 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:06Z","lastTransitionTime":"2026-01-21T21:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.974636 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.974723 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.974758 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.974814 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.974949 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.975055 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.975079 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.975141 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.975077 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:22.97504548 +0000 UTC m=+55.197223950 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.975168 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.975220 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:22.975174224 +0000 UTC m=+55.197352694 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.975233 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.975263 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.975288 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:22.975244226 +0000 UTC m=+55.197422696 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.975290 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:06 crc kubenswrapper[4860]: E0121 21:09:06.975348 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:22.975333397 +0000 UTC m=+55.197511867 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.983074 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.983123 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.983133 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.983152 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:06 crc kubenswrapper[4860]: I0121 21:09:06.983165 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:06Z","lastTransitionTime":"2026-01-21T21:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.086297 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.086354 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.086372 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.086400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.086416 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:07Z","lastTransitionTime":"2026-01-21T21:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.156701 4860 generic.go:334] "Generic (PLEG): container finished" podID="9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04" containerID="f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4" exitCode=0 Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.156769 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" event={"ID":"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04","Type":"ContainerDied","Data":"f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.159663 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.163239 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.163306 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.173348 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.190851 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.190975 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.190998 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.191035 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.191057 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:07Z","lastTransitionTime":"2026-01-21T21:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.191081 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.211526 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.224670 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.243152 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.256951 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.295970 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.296019 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.296030 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.296050 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.296061 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:07Z","lastTransitionTime":"2026-01-21T21:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.297640 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.315465 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.399089 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.399155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.399169 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.399192 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.399205 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:07Z","lastTransitionTime":"2026-01-21T21:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.502326 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.502382 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.502394 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.502416 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.502432 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:07Z","lastTransitionTime":"2026-01-21T21:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.578695 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:07 crc kubenswrapper[4860]: E0121 21:09:07.578880 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.579278 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:07 crc kubenswrapper[4860]: E0121 21:09:07.579329 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.590903 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.605258 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.605295 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.605305 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:07 crc 
kubenswrapper[4860]: I0121 21:09:07.605320 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.605329 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:07Z","lastTransitionTime":"2026-01-21T21:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.613380 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9
b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.630749 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.648240 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.669897 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.684320 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.698596 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.708455 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.708518 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.708535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.708558 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.708615 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:07Z","lastTransitionTime":"2026-01-21T21:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.722470 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.743138 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.758313 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.777074 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.793392 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.808831 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.811657 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:07 crc kubenswrapper[4860]: 
I0121 21:09:07.811704 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.811716 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.811736 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.811750 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:07Z","lastTransitionTime":"2026-01-21T21:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.825988 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.840301 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.857256 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.872030 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.874196 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:09:59.308196856 +0000 UTC Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.888861 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.907827 4860 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.915630 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.915676 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.915688 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.915711 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.915729 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:07Z","lastTransitionTime":"2026-01-21T21:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:07 crc kubenswrapper[4860]: I0121 21:09:07.931742 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:07Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.024493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.024558 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.024570 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 
21:09:08.024594 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.024606 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:08Z","lastTransitionTime":"2026-01-21T21:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.031664 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b"] Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.032410 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.037752 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.038364 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.055688 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.079091 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.098784 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.115350 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.128870 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:08 crc kubenswrapper[4860]: 
I0121 21:09:08.128954 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.128972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.129004 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.129016 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:08Z","lastTransitionTime":"2026-01-21T21:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.135877 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.154356 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.275768 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2"} Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.284885 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb31d86f-995f-4262-bd5f-0487bd341607-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.284951 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb31d86f-995f-4262-bd5f-0487bd341607-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.284973 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb31d86f-995f-4262-bd5f-0487bd341607-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.285007 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kslzj\" (UniqueName: \"kubernetes.io/projected/fb31d86f-995f-4262-bd5f-0487bd341607-kube-api-access-kslzj\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.287792 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.287892 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.287920 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.287945 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.287969 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.287983 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:08Z","lastTransitionTime":"2026-01-21T21:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.288538 4860 generic.go:334] "Generic (PLEG): container finished" podID="9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04" containerID="04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073" exitCode=0 Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.288599 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" event={"ID":"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04","Type":"ContainerDied","Data":"04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073"} Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.311960 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.332269 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.348895 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.368753 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.386770 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb31d86f-995f-4262-bd5f-0487bd341607-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.386841 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.386883 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb31d86f-995f-4262-bd5f-0487bd341607-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.387579 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb31d86f-995f-4262-bd5f-0487bd341607-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 
21:09:08.387612 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kslzj\" (UniqueName: \"kubernetes.io/projected/fb31d86f-995f-4262-bd5f-0487bd341607-kube-api-access-kslzj\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.388269 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb31d86f-995f-4262-bd5f-0487bd341607-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.388273 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb31d86f-995f-4262-bd5f-0487bd341607-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.392958 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.392986 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.392996 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.393018 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.393029 4860 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:08Z","lastTransitionTime":"2026-01-21T21:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.397452 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb31d86f-995f-4262-bd5f-0487bd341607-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.408157 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.411999 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kslzj\" (UniqueName: \"kubernetes.io/projected/fb31d86f-995f-4262-bd5f-0487bd341607-kube-api-access-kslzj\") pod \"ovnkube-control-plane-749d76644c-p4c4b\" (UID: \"fb31d86f-995f-4262-bd5f-0487bd341607\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.424469 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.457521 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.474675 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.490597 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.498158 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.498195 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.498204 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.498222 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.498232 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:08Z","lastTransitionTime":"2026-01-21T21:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.523294 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z 
is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.580267 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:08 crc kubenswrapper[4860]: E0121 21:09:08.580395 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.874392 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:15:35.079345854 +0000 UTC Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.881093 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.894143 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-
rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.896968 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 
21:09:08.897000 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.897011 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.897026 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.897036 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:08Z","lastTransitionTime":"2026-01-21T21:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:08 crc kubenswrapper[4860]: W0121 21:09:08.910666 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb31d86f_995f_4262_bd5f_0487bd341607.slice/crio-9e962b044219ba14b05960f84b76f011da83e8ee3cd029ce3d95de823364b280 WatchSource:0}: Error finding container 9e962b044219ba14b05960f84b76f011da83e8ee3cd029ce3d95de823364b280: Status 404 returned error can't find the container with id 9e962b044219ba14b05960f84b76f011da83e8ee3cd029ce3d95de823364b280 Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.922729 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:08 crc kubenswrapper[4860]: I0121 21:09:08.953722 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.008391 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.008691 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.008777 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.008883 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.009027 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:09Z","lastTransitionTime":"2026-01-21T21:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.061626 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.098755 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a793
79b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.112747 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.112794 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.112806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.112830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.112844 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:09Z","lastTransitionTime":"2026-01-21T21:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.117319 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.136431 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.152800 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.166775 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.180303 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.194007 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.216036 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.216653 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.216692 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.216701 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.216716 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.216726 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:09Z","lastTransitionTime":"2026-01-21T21:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.229180 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.260410 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.282721 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.295335 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" event={"ID":"fb31d86f-995f-4262-bd5f-0487bd341607","Type":"ContainerStarted","Data":"9e962b044219ba14b05960f84b76f011da83e8ee3cd029ce3d95de823364b280"} Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.300353 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.316416 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.319811 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.319863 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.319875 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.319892 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.319904 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:09Z","lastTransitionTime":"2026-01-21T21:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.330331 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.352403 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.368172 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.383997 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.399518 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.416224 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.422814 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.422866 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.422877 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.422896 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.422911 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:09Z","lastTransitionTime":"2026-01-21T21:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.433016 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.447313 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.460554 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.476916 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:09Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.527004 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.527107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.527134 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.527172 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.527199 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:09Z","lastTransitionTime":"2026-01-21T21:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.578435 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:09 crc kubenswrapper[4860]: E0121 21:09:09.578614 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.578707 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:09 crc kubenswrapper[4860]: E0121 21:09:09.578773 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.641209 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.641251 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.641260 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.641285 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:09 crc kubenswrapper[4860]: I0121 21:09:09.641298 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:09Z","lastTransitionTime":"2026-01-21T21:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.196129 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:48:33.33670153 +0000 UTC Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.203016 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.203061 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.203072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.203092 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.203103 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:10Z","lastTransitionTime":"2026-01-21T21:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.234029 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rrwcr"] Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.234493 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:10 crc kubenswrapper[4860]: E0121 21:09:10.234564 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.252194 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.273563 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.287266 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc 
kubenswrapper[4860]: I0121 21:09:10.296545 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.296600 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pj2d\" (UniqueName: \"kubernetes.io/projected/60ae05da-3403-4a2f-92f4-2ffa574a65a8-kube-api-access-5pj2d\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.300198 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" event={"ID":"fb31d86f-995f-4262-bd5f-0487bd341607","Type":"ContainerStarted","Data":"f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5"} Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.303065 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" event={"ID":"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04","Type":"ContainerStarted","Data":"7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc"} Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.304207 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.305090 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.305137 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.305155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.305171 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.305184 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:10Z","lastTransitionTime":"2026-01-21T21:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.321182 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.337913 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.351099 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.370553 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.385121 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.396403 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.397337 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.397422 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pj2d\" (UniqueName: \"kubernetes.io/projected/60ae05da-3403-4a2f-92f4-2ffa574a65a8-kube-api-access-5pj2d\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:10 crc kubenswrapper[4860]: E0121 21:09:10.397570 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:10 crc kubenswrapper[4860]: E0121 21:09:10.397673 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs podName:60ae05da-3403-4a2f-92f4-2ffa574a65a8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:10.897645477 +0000 UTC m=+43.119823937 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs") pod "network-metrics-daemon-rrwcr" (UID: "60ae05da-3403-4a2f-92f4-2ffa574a65a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.407841 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.407890 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.407907 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.407943 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.407959 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:10Z","lastTransitionTime":"2026-01-21T21:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.411281 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310
cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.423444 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pj2d\" (UniqueName: \"kubernetes.io/projected/60ae05da-3403-4a2f-92f4-2ffa574a65a8-kube-api-access-5pj2d\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.429552 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.449076 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.462983 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.486430 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.505946 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.510759 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.510800 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.510811 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.510826 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.510836 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:10Z","lastTransitionTime":"2026-01-21T21:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.525745 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.543663 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.565245 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.577901 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.578113 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:10 crc kubenswrapper[4860]: E0121 21:09:10.578328 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.594375 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.613819 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.614084 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.614155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.614172 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 
21:09:10.614199 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.614215 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:10Z","lastTransitionTime":"2026-01-21T21:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.652204 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.670977 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc 
kubenswrapper[4860]: I0121 21:09:10.688679 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r
gr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.717739 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.717808 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.717830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.717858 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.717874 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:10Z","lastTransitionTime":"2026-01-21T21:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.739834 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.763912 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21
:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.788450 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-p
roxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.814653 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.820844 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.820902 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.820920 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.820962 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.821010 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:10Z","lastTransitionTime":"2026-01-21T21:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.840371 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.856453 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.877638 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:10Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.905710 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:10 crc kubenswrapper[4860]: E0121 21:09:10.906033 4860 secret.go:188] 
Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:10 crc kubenswrapper[4860]: E0121 21:09:10.906239 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs podName:60ae05da-3403-4a2f-92f4-2ffa574a65a8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:11.906200275 +0000 UTC m=+44.128378775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs") pod "network-metrics-daemon-rrwcr" (UID: "60ae05da-3403-4a2f-92f4-2ffa574a65a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.924209 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.924791 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.925026 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.925242 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:10 crc kubenswrapper[4860]: I0121 21:09:10.925424 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:10Z","lastTransitionTime":"2026-01-21T21:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.028510 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.028951 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.028964 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.028981 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.028993 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:11Z","lastTransitionTime":"2026-01-21T21:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.197064 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:42:35.650538137 +0000 UTC Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.261815 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.261849 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.261863 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.261880 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.261890 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:11Z","lastTransitionTime":"2026-01-21T21:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.310224 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" event={"ID":"fb31d86f-995f-4262-bd5f-0487bd341607","Type":"ContainerStarted","Data":"7c98e12277db4cf54c69f202f29ad8b7817c635d828e6be36cf71792d6a3422c"} Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.330348 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.351623 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.363900 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.363956 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.363972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.363990 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.364002 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:11Z","lastTransitionTime":"2026-01-21T21:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.367994 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c635d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.383039 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.398748 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.414916 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.427697 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc 
kubenswrapper[4860]: I0121 21:09:11.442547 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r
gr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.464842 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.466995 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.467041 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.467051 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.467078 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.467088 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:11Z","lastTransitionTime":"2026-01-21T21:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.482812 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z 
is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.499116 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.517656 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa
4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.534464 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.552670 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.569339 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.569397 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.569413 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.569437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.569452 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:11Z","lastTransitionTime":"2026-01-21T21:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.577554 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.578119 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.578183 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:11 crc kubenswrapper[4860]: E0121 21:09:11.578255 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:11 crc kubenswrapper[4860]: E0121 21:09:11.578472 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.594756 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:11Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.673373 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.673415 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.673423 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.673441 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.673451 4860 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:11Z","lastTransitionTime":"2026-01-21T21:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.777089 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.777172 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.777199 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.777236 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.777263 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:11Z","lastTransitionTime":"2026-01-21T21:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.881329 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.881406 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.881426 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.881455 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.881475 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:11Z","lastTransitionTime":"2026-01-21T21:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.967247 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:11 crc kubenswrapper[4860]: E0121 21:09:11.967598 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:11 crc kubenswrapper[4860]: E0121 21:09:11.967815 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs podName:60ae05da-3403-4a2f-92f4-2ffa574a65a8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:13.96775414 +0000 UTC m=+46.189932650 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs") pod "network-metrics-daemon-rrwcr" (UID: "60ae05da-3403-4a2f-92f4-2ffa574a65a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.985111 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.985165 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.985179 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.985200 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:11 crc kubenswrapper[4860]: I0121 21:09:11.985219 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:11Z","lastTransitionTime":"2026-01-21T21:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.088897 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.088965 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.088975 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.088994 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.089006 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.160857 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.161140 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.162578 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070 is running failed: container process not found" containerID="1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.163388 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070 is running failed: container process not found" containerID="1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.163964 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070 is running failed: container process not found" containerID="1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.164006 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.164472 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070 is running failed: container process not found" containerID="1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.164923 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070 is running failed: container process not found" containerID="1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.165242 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070 is running failed: container process not found" containerID="1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.165320 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.192790 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.192859 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.192872 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.192895 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.192911 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.198141 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:22:08.98501148 +0000 UTC Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.296458 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.296518 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.296534 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.296554 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.296568 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.317000 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/0.log" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.320194 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070" exitCode=1 Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.320296 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.322158 4860 scope.go:117] "RemoveContainer" containerID="1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.344628 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.363959 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.389102 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.399722 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.399803 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.399821 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 
21:09:12.399851 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.399870 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.404972 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.419977 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.435917 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.452297 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.488461 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 21:09:11.331671 5987 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 21:09:11.331707 5987 handler.go:190] Sending *v1.EgressFirewall 
event handler 9 for removal\\\\nI0121 21:09:11.331767 5987 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 21:09:11.331787 5987 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 21:09:11.331793 5987 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 21:09:11.331803 5987 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 21:09:11.331825 5987 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 21:09:11.331841 5987 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 21:09:11.331879 5987 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 21:09:11.331888 5987 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 21:09:11.331864 5987 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 21:09:11.331951 5987 factory.go:656] Stopping watch factory\\\\nI0121 21:09:11.331959 5987 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 21:09:11.331968 5987 ovnkube.go:599] Stopped ovnkube\\\\nI0121 21:09:11.331967 5987 handler.go:208] Removed *v1.Namespace event handler 
5\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964a
f55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.503516 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.503573 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.503587 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.503615 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.503630 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.504531 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc 
kubenswrapper[4860]: I0121 21:09:12.513172 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.513233 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.513245 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.513277 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.513290 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.525503 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.543680 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.546144 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.552681 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.552754 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.552766 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.552791 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.552808 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.567253 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.571706 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.577348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.577432 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.577447 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.577484 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.577499 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.577807 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.577807 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.578010 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.578115 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.583264 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.592432 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.596674 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.596728 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.596739 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.596762 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.596775 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.598323 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.614214 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.617842 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf
02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 
21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec
3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.619011 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.619059 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.619071 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.619091 4860 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.619102 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.630514 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.637094 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet 
has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:12Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:12 crc kubenswrapper[4860]: E0121 21:09:12.637229 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.639463 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.639497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.639507 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.639525 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.639536 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.743705 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.743758 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.743769 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.743787 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.743818 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.847962 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.848637 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.848651 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.848675 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.848689 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.953121 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.953208 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.953235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.953270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:12 crc kubenswrapper[4860]: I0121 21:09:12.953291 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:12Z","lastTransitionTime":"2026-01-21T21:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.056991 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.057074 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.057094 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.057128 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.057149 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:13Z","lastTransitionTime":"2026-01-21T21:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.161246 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.161298 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.161310 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.161335 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.161345 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:13Z","lastTransitionTime":"2026-01-21T21:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.199188 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:28:23.056930031 +0000 UTC Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.268745 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.268800 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.268811 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.268835 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.268852 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:13Z","lastTransitionTime":"2026-01-21T21:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.327704 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/0.log" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.330719 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de"} Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.331420 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.360879 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 
21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.372240 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.372297 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.372307 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.372338 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.372354 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:13Z","lastTransitionTime":"2026-01-21T21:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.378738 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.400533 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a793
79b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.425471 4860 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.441351 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.461566 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.476014 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.476139 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.476156 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.476205 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.476221 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:13Z","lastTransitionTime":"2026-01-21T21:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.487477 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.505794 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.519329 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.540810 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 21:09:11.331671 5987 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 21:09:11.331707 5987 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 21:09:11.331767 5987 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0121 21:09:11.331787 5987 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 21:09:11.331793 5987 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 21:09:11.331803 5987 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 21:09:11.331825 5987 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 21:09:11.331841 5987 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 21:09:11.331879 5987 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 21:09:11.331888 5987 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 21:09:11.331864 5987 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 21:09:11.331951 5987 factory.go:656] Stopping watch factory\\\\nI0121 21:09:11.331959 5987 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 21:09:11.331968 5987 ovnkube.go:599] Stopped ovnkube\\\\nI0121 21:09:11.331967 5987 handler.go:208] Removed *v1.Namespace event handler 
5\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.554300 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc 
kubenswrapper[4860]: I0121 21:09:13.573912 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.577908 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.578031 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:13 crc kubenswrapper[4860]: E0121 21:09:13.578475 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:13 crc kubenswrapper[4860]: E0121 21:09:13.578761 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.578993 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.579047 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.579061 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.579087 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.579109 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:13Z","lastTransitionTime":"2026-01-21T21:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.592844 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.609128 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21
:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.624597 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-p
roxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.645709 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:13Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.683063 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.683126 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.683148 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.683183 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.683204 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:13Z","lastTransitionTime":"2026-01-21T21:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.786730 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.786813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.786833 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.786871 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.786895 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:13Z","lastTransitionTime":"2026-01-21T21:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.890546 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.890590 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.890600 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.890616 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.890627 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:13Z","lastTransitionTime":"2026-01-21T21:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.991486 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:13 crc kubenswrapper[4860]: E0121 21:09:13.991667 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:13 crc kubenswrapper[4860]: E0121 21:09:13.991977 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs podName:60ae05da-3403-4a2f-92f4-2ffa574a65a8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:17.991752867 +0000 UTC m=+50.213931337 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs") pod "network-metrics-daemon-rrwcr" (UID: "60ae05da-3403-4a2f-92f4-2ffa574a65a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.994105 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.994196 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.994212 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.994233 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:13 crc kubenswrapper[4860]: I0121 21:09:13.994247 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:13Z","lastTransitionTime":"2026-01-21T21:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.098353 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.098423 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.098445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.098472 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.098489 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:14Z","lastTransitionTime":"2026-01-21T21:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.199538 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 19:04:01.120827106 +0000 UTC Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.202028 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.202104 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.202133 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.202158 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.202172 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:14Z","lastTransitionTime":"2026-01-21T21:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.305493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.305544 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.305554 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.305571 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.305583 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:14Z","lastTransitionTime":"2026-01-21T21:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.339123 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/1.log" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.340466 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/0.log" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.344025 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de" exitCode=1 Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.344077 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de"} Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.344138 4860 scope.go:117] "RemoveContainer" containerID="1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.346389 4860 scope.go:117] "RemoveContainer" containerID="816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de" Jan 21 21:09:14 crc kubenswrapper[4860]: E0121 21:09:14.346730 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.365476 4860 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e
89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.527726 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.527948 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.528026 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.528092 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.528154 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:14Z","lastTransitionTime":"2026-01-21T21:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.528773 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.545460 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.567131 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.578306 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:14 crc kubenswrapper[4860]: E0121 21:09:14.578528 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.578721 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:14 crc kubenswrapper[4860]: E0121 21:09:14.579039 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.585375 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.602794 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.621489 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.631522 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.633381 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.633414 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.633445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.633465 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:14Z","lastTransitionTime":"2026-01-21T21:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.636369 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c635d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.651166 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.679033 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b8451498e7958eb931f30d83f26918a2b73d48f5514069263fb4377e00c8070\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 21:09:11.331671 5987 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 21:09:11.331707 5987 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 21:09:11.331767 5987 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 21:09:11.331787 5987 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 21:09:11.331793 5987 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 21:09:11.331803 5987 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 21:09:11.331825 5987 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 21:09:11.331841 5987 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 21:09:11.331879 5987 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 21:09:11.331888 5987 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 21:09:11.331864 5987 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 21:09:11.331951 5987 factory.go:656] Stopping watch factory\\\\nI0121 21:09:11.331959 5987 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 21:09:11.331968 5987 ovnkube.go:599] Stopped ovnkube\\\\nI0121 21:09:11.331967 5987 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:13Z\\\",\\\"message\\\":\\\"e-config-operator/machine-config-daemon-w47lx\\\\nI0121 21:09:13.484329 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0121 21:09:13.484114 6302 ovn.go:134] Ensuring zone local for Pod 
openshift-ovn-kubernetes/ovnkube-node-pzw2c in node crc\\\\nI0121 21:09:13.484340 6302 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0121 21:09:13.484351 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c after 0 failed attempt(s)\\\\nI0121 21:09:13.484365 6302 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-pzw2c\\\\nI0121 21:09:13.484318 6302 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 21:09:13.484151 6302 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0121 21:09:13.484389 6302 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0121 21:09:13.484391 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55
b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.692742 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.707295 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.724778 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.736811 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.737244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.737319 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.737407 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.737474 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:14Z","lastTransitionTime":"2026-01-21T21:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.739695 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z 
is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.755471 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.776412 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa
4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.840861 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.840905 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.840917 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.840956 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.840967 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:14Z","lastTransitionTime":"2026-01-21T21:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.945100 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.945159 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.945171 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.945194 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:14 crc kubenswrapper[4860]: I0121 21:09:14.945209 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:14Z","lastTransitionTime":"2026-01-21T21:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.048467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.048527 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.048537 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.048552 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.048562 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:15Z","lastTransitionTime":"2026-01-21T21:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.151421 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.151458 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.151467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.151481 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.151491 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:15Z","lastTransitionTime":"2026-01-21T21:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.200619 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:23:03.759653567 +0000 UTC Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.254058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.254102 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.254113 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.254129 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.254140 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:15Z","lastTransitionTime":"2026-01-21T21:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.350130 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/1.log" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.354115 4860 scope.go:117] "RemoveContainer" containerID="816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de" Jan 21 21:09:15 crc kubenswrapper[4860]: E0121 21:09:15.354305 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.355916 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.355962 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.355973 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.355987 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.355999 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:15Z","lastTransitionTime":"2026-01-21T21:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.377847 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.400563 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.416441 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.439225 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.459080 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.459473 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.459594 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.459864 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.460102 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.460313 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:15Z","lastTransitionTime":"2026-01-21T21:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.476145 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.500324 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.516032 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.532522 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.554284 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.563093 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.563136 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.563151 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.563169 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.563182 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:15Z","lastTransitionTime":"2026-01-21T21:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.569913 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c635d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.578363 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.578413 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:15 crc kubenswrapper[4860]: E0121 21:09:15.579012 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:15 crc kubenswrapper[4860]: E0121 21:09:15.579015 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.589164 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.610232 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.635474 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:13Z\\\",\\\"message\\\":\\\"e-config-operator/machine-config-daemon-w47lx\\\\nI0121 21:09:13.484329 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0121 21:09:13.484114 6302 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c in node crc\\\\nI0121 
21:09:13.484340 6302 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0121 21:09:13.484351 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c after 0 failed attempt(s)\\\\nI0121 21:09:13.484365 6302 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-pzw2c\\\\nI0121 21:09:13.484318 6302 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 21:09:13.484151 6302 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0121 21:09:13.484389 6302 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0121 21:09:13.484391 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.813471 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.813590 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.814528 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.814624 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.814643 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:15Z","lastTransitionTime":"2026-01-21T21:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.826759 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc 
kubenswrapper[4860]: I0121 21:09:15.840711 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r
gr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.918364 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.918468 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.918480 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.918499 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:15 crc kubenswrapper[4860]: I0121 21:09:15.918528 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:15Z","lastTransitionTime":"2026-01-21T21:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.021753 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.021796 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.021806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.021824 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.021835 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:16Z","lastTransitionTime":"2026-01-21T21:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.125384 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.125488 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.125512 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.125548 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.125573 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:16Z","lastTransitionTime":"2026-01-21T21:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.201717 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:46:35.203056116 +0000 UTC Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.229145 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.229212 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.229226 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.229248 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.229265 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:16Z","lastTransitionTime":"2026-01-21T21:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.331870 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.331972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.331993 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.332023 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.332047 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:16Z","lastTransitionTime":"2026-01-21T21:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.436150 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.436228 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.436249 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.436279 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.436299 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:16Z","lastTransitionTime":"2026-01-21T21:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.540011 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.540071 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.540090 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.540119 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.540139 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:16Z","lastTransitionTime":"2026-01-21T21:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.578726 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.578843 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:16 crc kubenswrapper[4860]: E0121 21:09:16.579158 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:16 crc kubenswrapper[4860]: E0121 21:09:16.579286 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.643894 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.643996 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.644013 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.644045 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.644066 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:16Z","lastTransitionTime":"2026-01-21T21:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.748222 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.748315 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.748341 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.748377 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.748403 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:16Z","lastTransitionTime":"2026-01-21T21:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.852781 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.852887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.852901 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.852924 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.852966 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:16Z","lastTransitionTime":"2026-01-21T21:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.956494 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.956580 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.956594 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.956617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:16 crc kubenswrapper[4860]: I0121 21:09:16.956632 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:16Z","lastTransitionTime":"2026-01-21T21:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.060353 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.060415 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.060427 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.060448 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.060460 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:17Z","lastTransitionTime":"2026-01-21T21:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.163827 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.163874 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.163886 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.163908 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.163921 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:17Z","lastTransitionTime":"2026-01-21T21:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.201915 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:15:47.239373861 +0000 UTC Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.268195 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.268264 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.268279 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.268308 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.268340 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:17Z","lastTransitionTime":"2026-01-21T21:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.372148 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.372217 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.372233 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.372258 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.372274 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:17Z","lastTransitionTime":"2026-01-21T21:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.475968 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.476013 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.476025 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.476048 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.476062 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:17Z","lastTransitionTime":"2026-01-21T21:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.578037 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.578139 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:17 crc kubenswrapper[4860]: E0121 21:09:17.578543 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:17 crc kubenswrapper[4860]: E0121 21:09:17.578765 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.580043 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.580124 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.580178 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.580216 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.580240 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:17Z","lastTransitionTime":"2026-01-21T21:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.684197 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.684271 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.684283 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.684302 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.684313 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:17Z","lastTransitionTime":"2026-01-21T21:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.787845 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.787911 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.787921 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.787958 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.787969 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:17Z","lastTransitionTime":"2026-01-21T21:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.891299 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.891347 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.891368 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.891393 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.891409 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:17Z","lastTransitionTime":"2026-01-21T21:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.994704 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.994768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.994786 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.994811 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:17 crc kubenswrapper[4860]: I0121 21:09:17.994826 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:17Z","lastTransitionTime":"2026-01-21T21:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.039246 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:18 crc kubenswrapper[4860]: E0121 21:09:18.039505 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:18 crc kubenswrapper[4860]: E0121 21:09:18.039620 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs podName:60ae05da-3403-4a2f-92f4-2ffa574a65a8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:26.039587224 +0000 UTC m=+58.261765734 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs") pod "network-metrics-daemon-rrwcr" (UID: "60ae05da-3403-4a2f-92f4-2ffa574a65a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.098482 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.098635 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.098664 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.098709 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.098740 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:18Z","lastTransitionTime":"2026-01-21T21:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.202257 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 06:27:35.444305635 +0000 UTC Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.204213 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.204294 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.204313 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.204343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.204363 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:18Z","lastTransitionTime":"2026-01-21T21:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.307500 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.307590 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.307610 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.307667 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.307684 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:18Z","lastTransitionTime":"2026-01-21T21:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.410712 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.410769 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.410785 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.410812 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.410827 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:18Z","lastTransitionTime":"2026-01-21T21:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.513555 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.513617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.513634 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.513659 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.513676 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:18Z","lastTransitionTime":"2026-01-21T21:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.578511 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.578583 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:18 crc kubenswrapper[4860]: E0121 21:09:18.578721 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:18 crc kubenswrapper[4860]: E0121 21:09:18.578920 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.598464 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.615818 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.617742 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:18 crc 
kubenswrapper[4860]: I0121 21:09:18.617801 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.617811 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.617830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.617841 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:18Z","lastTransitionTime":"2026-01-21T21:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.632556 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.650227 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa
4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.665924 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.676417 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.694911 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.713487 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.721730 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.721792 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.721806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.721825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.721838 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:18Z","lastTransitionTime":"2026-01-21T21:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.730917 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.747812 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.761625 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.777062 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.789747 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.810870 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:13Z\\\",\\\"message\\\":\\\"e-config-operator/machine-config-daemon-w47lx\\\\nI0121 21:09:13.484329 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0121 21:09:13.484114 6302 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c in node crc\\\\nI0121 
21:09:13.484340 6302 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0121 21:09:13.484351 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c after 0 failed attempt(s)\\\\nI0121 21:09:13.484365 6302 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-pzw2c\\\\nI0121 21:09:13.484318 6302 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 21:09:13.484151 6302 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0121 21:09:13.484389 6302 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0121 21:09:13.484391 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.821706 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.824660 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.824717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.824730 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.824745 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.824755 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:18Z","lastTransitionTime":"2026-01-21T21:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.835342 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.927985 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.928365 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.928441 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.928515 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:18 crc kubenswrapper[4860]: I0121 21:09:18.928579 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:18Z","lastTransitionTime":"2026-01-21T21:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.032161 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.032254 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.032277 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.032311 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.032337 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:19Z","lastTransitionTime":"2026-01-21T21:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.136283 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.136336 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.136348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.136368 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.136380 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:19Z","lastTransitionTime":"2026-01-21T21:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.203449 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:15:51.495275387 +0000 UTC Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.241733 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.241795 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.241812 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.241838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.241854 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:19Z","lastTransitionTime":"2026-01-21T21:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.344720 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.344780 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.344793 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.344812 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.344824 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:19Z","lastTransitionTime":"2026-01-21T21:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.448376 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.448450 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.448477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.448520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.448548 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:19Z","lastTransitionTime":"2026-01-21T21:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.551425 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.551500 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.551514 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.551535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.551551 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:19Z","lastTransitionTime":"2026-01-21T21:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.578316 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.578352 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:19 crc kubenswrapper[4860]: E0121 21:09:19.578654 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:19 crc kubenswrapper[4860]: E0121 21:09:19.578849 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.654915 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.655006 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.655016 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.655029 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.655038 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:19Z","lastTransitionTime":"2026-01-21T21:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.758469 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.758546 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.758563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.758589 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.758605 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:19Z","lastTransitionTime":"2026-01-21T21:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.862530 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.862612 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.862634 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.862669 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.862690 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:19Z","lastTransitionTime":"2026-01-21T21:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.966490 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.966546 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.966561 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.966585 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:19 crc kubenswrapper[4860]: I0121 21:09:19.966599 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:19Z","lastTransitionTime":"2026-01-21T21:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.069823 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.069859 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.069869 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.069885 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.069894 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:20Z","lastTransitionTime":"2026-01-21T21:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.172390 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.172459 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.172473 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.172520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.172538 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:20Z","lastTransitionTime":"2026-01-21T21:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.204239 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:22:18.767628648 +0000 UTC Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.276116 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.276209 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.276233 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.276263 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.276283 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:20Z","lastTransitionTime":"2026-01-21T21:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.379779 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.379841 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.379865 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.379896 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.379916 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:20Z","lastTransitionTime":"2026-01-21T21:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.484142 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.484704 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.484903 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.485085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.485193 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:20Z","lastTransitionTime":"2026-01-21T21:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.578547 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:20 crc kubenswrapper[4860]: E0121 21:09:20.578669 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.578548 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:20 crc kubenswrapper[4860]: E0121 21:09:20.578758 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.587881 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.587954 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.587971 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.587986 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.587998 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:20Z","lastTransitionTime":"2026-01-21T21:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.691191 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.691257 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.691270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.691305 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.691318 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:20Z","lastTransitionTime":"2026-01-21T21:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.793771 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.793822 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.793859 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.793881 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.793894 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:20Z","lastTransitionTime":"2026-01-21T21:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.897170 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.897219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.897234 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.897254 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:20 crc kubenswrapper[4860]: I0121 21:09:20.897268 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:20Z","lastTransitionTime":"2026-01-21T21:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.000277 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.000310 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.000319 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.000332 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.000341 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:21Z","lastTransitionTime":"2026-01-21T21:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.111562 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.111603 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.111616 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.111634 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.111649 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:21Z","lastTransitionTime":"2026-01-21T21:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.204678 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:19:29.583281165 +0000 UTC Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.214674 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.214753 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.214766 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.214791 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.214805 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:21Z","lastTransitionTime":"2026-01-21T21:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.321045 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.321526 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.321686 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.322304 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.322861 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:21Z","lastTransitionTime":"2026-01-21T21:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.426632 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.426718 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.426744 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.426775 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.426796 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:21Z","lastTransitionTime":"2026-01-21T21:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.530279 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.530361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.530383 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.530410 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.530426 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:21Z","lastTransitionTime":"2026-01-21T21:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.578836 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.578883 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:21 crc kubenswrapper[4860]: E0121 21:09:21.579066 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:21 crc kubenswrapper[4860]: E0121 21:09:21.579216 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.633174 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.633226 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.633235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.633253 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.633265 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:21Z","lastTransitionTime":"2026-01-21T21:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.727875 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.736210 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.736249 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.736261 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.736279 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.736292 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:21Z","lastTransitionTime":"2026-01-21T21:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.742205 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.744484 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.760954 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.776477 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.791655 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.810978 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.825485 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.838481 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.838535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.838550 4860 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.838569 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.838581 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:21Z","lastTransitionTime":"2026-01-21T21:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.841825 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.857134 4860 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.879890 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:13Z\\\",\\\"message\\\":\\\"e-config-operator/machine-config-daemon-w47lx\\\\nI0121 21:09:13.484329 6302 
obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0121 21:09:13.484114 6302 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c in node crc\\\\nI0121 21:09:13.484340 6302 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0121 21:09:13.484351 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c after 0 failed attempt(s)\\\\nI0121 21:09:13.484365 6302 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-pzw2c\\\\nI0121 21:09:13.484318 6302 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 21:09:13.484151 6302 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0121 21:09:13.484389 6302 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0121 21:09:13.484391 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.892560 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.905649 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.925217 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.941042 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.941088 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.941098 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.941115 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.941127 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:21Z","lastTransitionTime":"2026-01-21T21:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.944176 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z 
is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.958844 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.978801 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa
4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:21 crc kubenswrapper[4860]: I0121 21:09:21.997804 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.044809 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.044861 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.044874 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.044896 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.044911 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.149186 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.149244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.149259 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.149285 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.149301 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.205330 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:01:58.975667723 +0000 UTC Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.252820 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.253330 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.253847 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.254223 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.254330 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.358504 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.358871 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.358967 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.359103 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.359190 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.462537 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.462989 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.463102 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.463202 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.463306 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.566401 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.566447 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.566461 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.566485 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.566500 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.578549 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.578627 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.578713 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.578893 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.671244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.671353 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.671369 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.671395 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.671413 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.774741 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.774801 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.774811 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.774832 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.774844 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.793258 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.793428 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 21:09:54.793391393 +0000 UTC m=+87.015569883 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.878413 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.878464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.878477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.878501 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.878514 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.915697 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.915782 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.915806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.915839 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.915859 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.939769 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.946103 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.946165 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.946178 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.946205 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.946220 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.965767 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.971605 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.971690 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.971707 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.971740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.971762 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:22Z","lastTransitionTime":"2026-01-21T21:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.995657 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.995789 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.995837 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:22 crc kubenswrapper[4860]: I0121 21:09:22.995881 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.995977 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996085 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996115 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:54.996082385 +0000 UTC m=+87.218260995 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996189 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:54.996158187 +0000 UTC m=+87.218336847 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996255 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996335 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996360 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996280 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996428 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996456 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996471 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:54.996439346 +0000 UTC m=+87.218617826 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996284 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:22 crc kubenswrapper[4860]: E0121 21:09:22.996534 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:54.996507028 +0000 UTC m=+87.218685528 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.003498 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.003560 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.003583 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.003618 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.003638 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:23 crc kubenswrapper[4860]: E0121 21:09:23.019922 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.024878 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.024992 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.025023 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.025060 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.025085 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:23 crc kubenswrapper[4860]: E0121 21:09:23.043002 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:23 crc kubenswrapper[4860]: E0121 21:09:23.043218 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.045412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.045445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.045455 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.045473 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.045486 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.148866 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.149031 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.149051 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.149072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.149087 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.206761 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:41:40.39788743 +0000 UTC Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.253633 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.253837 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.253867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.253907 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.253968 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.356720 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.356764 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.356776 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.356796 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.356807 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.461124 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.461212 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.461226 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.461251 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.461271 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.564281 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.564365 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.564385 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.564413 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.564433 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.578537 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.578537 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:23 crc kubenswrapper[4860]: E0121 21:09:23.578744 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:23 crc kubenswrapper[4860]: E0121 21:09:23.578841 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.667842 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.667889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.667897 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.667916 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.667943 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.770501 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.770547 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.770556 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.770574 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.770584 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.873984 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.874041 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.874058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.874079 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.874092 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.977078 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.977168 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.977187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.977219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:23 crc kubenswrapper[4860]: I0121 21:09:23.977239 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:23Z","lastTransitionTime":"2026-01-21T21:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.079577 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.079630 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.079645 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.079665 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.079678 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:24Z","lastTransitionTime":"2026-01-21T21:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.182105 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.182149 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.182160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.182178 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.182192 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:24Z","lastTransitionTime":"2026-01-21T21:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.207619 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:02:16.235935944 +0000 UTC Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.284585 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.284654 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.284672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.284690 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.284706 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:24Z","lastTransitionTime":"2026-01-21T21:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.386614 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.386652 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.386661 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.386676 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.386685 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:24Z","lastTransitionTime":"2026-01-21T21:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.489944 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.489993 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.490007 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.490024 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.490041 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:24Z","lastTransitionTime":"2026-01-21T21:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.578323 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.578323 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:24 crc kubenswrapper[4860]: E0121 21:09:24.578473 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:24 crc kubenswrapper[4860]: E0121 21:09:24.578520 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.592917 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.592991 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.593007 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.593032 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.593050 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:24Z","lastTransitionTime":"2026-01-21T21:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.695496 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.695549 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.695566 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.695590 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.695605 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:24Z","lastTransitionTime":"2026-01-21T21:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.802815 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.802871 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.802889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.802913 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.802929 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:24Z","lastTransitionTime":"2026-01-21T21:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.905613 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.905668 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.905680 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.905695 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:24 crc kubenswrapper[4860]: I0121 21:09:24.905704 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:24Z","lastTransitionTime":"2026-01-21T21:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.008574 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.008628 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.008640 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.008659 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.008678 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:25Z","lastTransitionTime":"2026-01-21T21:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.110741 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.110801 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.110820 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.110849 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.110864 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:25Z","lastTransitionTime":"2026-01-21T21:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.208232 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:58:22.523075653 +0000 UTC Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.213758 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.213802 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.213816 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.213834 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.213846 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:25Z","lastTransitionTime":"2026-01-21T21:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.316183 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.316226 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.316244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.316263 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.316275 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:25Z","lastTransitionTime":"2026-01-21T21:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.418706 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.418772 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.418785 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.418805 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.418819 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:25Z","lastTransitionTime":"2026-01-21T21:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.522298 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.522358 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.522371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.522389 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.522404 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:25Z","lastTransitionTime":"2026-01-21T21:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.578827 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:25 crc kubenswrapper[4860]: E0121 21:09:25.579041 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.579132 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:25 crc kubenswrapper[4860]: E0121 21:09:25.579390 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.625117 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.625160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.625172 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.625190 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.625203 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:25Z","lastTransitionTime":"2026-01-21T21:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.727473 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.727507 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.727516 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.727533 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.727545 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:25Z","lastTransitionTime":"2026-01-21T21:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.829785 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.829825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.829834 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.829848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.829857 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:25Z","lastTransitionTime":"2026-01-21T21:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.933274 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.933339 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.933352 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.933367 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:25 crc kubenswrapper[4860]: I0121 21:09:25.933379 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:25Z","lastTransitionTime":"2026-01-21T21:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.035958 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.036007 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.036018 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.036036 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.036047 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:26Z","lastTransitionTime":"2026-01-21T21:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.129045 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:26 crc kubenswrapper[4860]: E0121 21:09:26.129224 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:26 crc kubenswrapper[4860]: E0121 21:09:26.129285 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs podName:60ae05da-3403-4a2f-92f4-2ffa574a65a8 nodeName:}" failed. No retries permitted until 2026-01-21 21:09:42.12926835 +0000 UTC m=+74.351446810 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs") pod "network-metrics-daemon-rrwcr" (UID: "60ae05da-3403-4a2f-92f4-2ffa574a65a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.138170 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.138207 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.138216 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.138232 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.138241 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:26Z","lastTransitionTime":"2026-01-21T21:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.208627 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:03:19.565196123 +0000 UTC Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.241227 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.241266 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.241276 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.241291 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.241300 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:26Z","lastTransitionTime":"2026-01-21T21:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.345067 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.345141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.345161 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.345186 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.345213 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:26Z","lastTransitionTime":"2026-01-21T21:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.447889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.447944 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.447954 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.447968 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.447979 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:26Z","lastTransitionTime":"2026-01-21T21:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.550875 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.550945 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.550964 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.550982 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.550993 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:26Z","lastTransitionTime":"2026-01-21T21:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.578593 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.578611 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:26 crc kubenswrapper[4860]: E0121 21:09:26.578752 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:26 crc kubenswrapper[4860]: E0121 21:09:26.579336 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.653830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.653882 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.653892 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.653910 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.653922 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:26Z","lastTransitionTime":"2026-01-21T21:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.756877 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.756922 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.756945 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.756961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.756973 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:26Z","lastTransitionTime":"2026-01-21T21:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.858781 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.858830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.858841 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.858857 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.858869 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:26Z","lastTransitionTime":"2026-01-21T21:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.962380 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.962427 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.962440 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.962459 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:26 crc kubenswrapper[4860]: I0121 21:09:26.962473 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:26Z","lastTransitionTime":"2026-01-21T21:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.065097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.065140 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.065151 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.065168 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.065179 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:27Z","lastTransitionTime":"2026-01-21T21:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.168708 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.168756 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.168770 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.168790 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.168804 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:27Z","lastTransitionTime":"2026-01-21T21:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.209691 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:16:07.568232499 +0000 UTC Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.270974 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.271018 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.271030 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.271046 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.271059 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:27Z","lastTransitionTime":"2026-01-21T21:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.373589 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.374102 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.374786 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.374806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.374818 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:27Z","lastTransitionTime":"2026-01-21T21:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.477535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.477575 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.477585 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.477599 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.477609 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:27Z","lastTransitionTime":"2026-01-21T21:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.578307 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.578370 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:27 crc kubenswrapper[4860]: E0121 21:09:27.578455 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:27 crc kubenswrapper[4860]: E0121 21:09:27.578544 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.579858 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.579890 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.579899 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.579913 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.579923 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:27Z","lastTransitionTime":"2026-01-21T21:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.682571 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.682621 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.682630 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.682647 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.682657 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:27Z","lastTransitionTime":"2026-01-21T21:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.785183 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.785235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.785248 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.785271 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.785285 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:27Z","lastTransitionTime":"2026-01-21T21:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.887770 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.887841 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.887868 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.887899 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.887922 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:27Z","lastTransitionTime":"2026-01-21T21:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.990768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.990829 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.990846 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.990895 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:27 crc kubenswrapper[4860]: I0121 21:09:27.990910 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:27Z","lastTransitionTime":"2026-01-21T21:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.094065 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.094135 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.094154 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.094179 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.094196 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:28Z","lastTransitionTime":"2026-01-21T21:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.197010 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.197068 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.197082 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.197108 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.197127 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:28Z","lastTransitionTime":"2026-01-21T21:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.210563 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:56:44.729899222 +0000 UTC Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.300142 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.300203 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.300223 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.300250 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.300267 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:28Z","lastTransitionTime":"2026-01-21T21:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.404118 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.404184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.404201 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.404268 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.404287 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:28Z","lastTransitionTime":"2026-01-21T21:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.512267 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.512344 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.512366 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.512420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.512443 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:28Z","lastTransitionTime":"2026-01-21T21:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.578046 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.578173 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:28 crc kubenswrapper[4860]: E0121 21:09:28.578266 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:28 crc kubenswrapper[4860]: E0121 21:09:28.578471 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.600752 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.615651 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.615717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.615738 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.615769 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.615788 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:28Z","lastTransitionTime":"2026-01-21T21:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.639183 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:13Z\\\",\\\"message\\\":\\\"e-config-operator/machine-config-daemon-w47lx\\\\nI0121 21:09:13.484329 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0121 21:09:13.484114 6302 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c in node crc\\\\nI0121 
21:09:13.484340 6302 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0121 21:09:13.484351 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c after 0 failed attempt(s)\\\\nI0121 21:09:13.484365 6302 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-pzw2c\\\\nI0121 21:09:13.484318 6302 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 21:09:13.484151 6302 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0121 21:09:13.484389 6302 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0121 21:09:13.484391 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.661203 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.690390 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a24
73a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/c
ni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.714529 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.719311 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.719353 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.719384 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.719404 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.719423 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:28Z","lastTransitionTime":"2026-01-21T21:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.738359 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.759372 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.779334 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e
2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.796823 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.812792 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.822801 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.822866 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.822880 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.822904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.822919 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:28Z","lastTransitionTime":"2026-01-21T21:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.831064 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.846721 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c635d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.865306 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster
-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.883068 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.910575 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.925994 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.926056 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.926069 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.926090 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.926110 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:28Z","lastTransitionTime":"2026-01-21T21:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.941303 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:28 crc kubenswrapper[4860]: I0121 21:09:28.965107 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.029100 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.029152 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.029163 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.029182 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.029192 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:29Z","lastTransitionTime":"2026-01-21T21:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.133927 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.134041 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.134085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.134117 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.134134 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:29Z","lastTransitionTime":"2026-01-21T21:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.211800 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:08:30.13409957 +0000 UTC Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.237309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.237388 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.237398 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.237422 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.237435 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:29Z","lastTransitionTime":"2026-01-21T21:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.342501 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.342566 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.342586 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.342614 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.342634 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:29Z","lastTransitionTime":"2026-01-21T21:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.449785 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.449848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.449860 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.449881 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.449894 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:29Z","lastTransitionTime":"2026-01-21T21:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.553645 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.553701 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.553711 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.553767 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.553780 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:29Z","lastTransitionTime":"2026-01-21T21:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.578961 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.579062 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:29 crc kubenswrapper[4860]: E0121 21:09:29.579189 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:29 crc kubenswrapper[4860]: E0121 21:09:29.579375 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.657517 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.657613 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.657633 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.657662 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.657679 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:29Z","lastTransitionTime":"2026-01-21T21:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.762289 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.762383 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.762411 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.762444 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.762469 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:29Z","lastTransitionTime":"2026-01-21T21:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.866158 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.866219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.866230 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.866248 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.866264 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:29Z","lastTransitionTime":"2026-01-21T21:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.969971 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.970020 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.970034 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.970054 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:29 crc kubenswrapper[4860]: I0121 21:09:29.970066 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:29Z","lastTransitionTime":"2026-01-21T21:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.073209 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.073279 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.073296 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.073317 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.073335 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:30Z","lastTransitionTime":"2026-01-21T21:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.176601 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.176666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.176679 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.176700 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.176714 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:30Z","lastTransitionTime":"2026-01-21T21:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.212489 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:44:48.840170512 +0000 UTC Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.280620 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.280666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.280676 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.280692 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.280702 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:30Z","lastTransitionTime":"2026-01-21T21:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.384321 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.384390 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.384410 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.384440 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.384472 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:30Z","lastTransitionTime":"2026-01-21T21:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.487927 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.488014 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.488028 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.488053 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.488067 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:30Z","lastTransitionTime":"2026-01-21T21:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.578808 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:30 crc kubenswrapper[4860]: E0121 21:09:30.579038 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.579105 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:30 crc kubenswrapper[4860]: E0121 21:09:30.579319 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.580098 4860 scope.go:117] "RemoveContainer" containerID="816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.605757 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.605821 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.605836 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.605863 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.605881 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:30Z","lastTransitionTime":"2026-01-21T21:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.710403 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.710864 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.710883 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.710910 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.710928 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:30Z","lastTransitionTime":"2026-01-21T21:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.814137 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.814186 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.814202 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.814227 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.814243 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:30Z","lastTransitionTime":"2026-01-21T21:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.918372 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.918448 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.918470 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.918502 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:30 crc kubenswrapper[4860]: I0121 21:09:30.918522 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:30Z","lastTransitionTime":"2026-01-21T21:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.021621 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.021668 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.021678 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.021693 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.021705 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:31Z","lastTransitionTime":"2026-01-21T21:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.125236 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.125315 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.125335 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.125373 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.125399 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:31Z","lastTransitionTime":"2026-01-21T21:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.213523 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:11:15.513982155 +0000 UTC Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.228793 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.228897 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.228911 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.228946 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.228965 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:31Z","lastTransitionTime":"2026-01-21T21:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.331407 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.331466 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.331484 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.331512 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.331532 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:31Z","lastTransitionTime":"2026-01-21T21:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.418868 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/1.log" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.422112 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849"} Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.422570 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.433763 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.433804 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.433817 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.433834 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.433847 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:31Z","lastTransitionTime":"2026-01-21T21:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.443200 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.463860 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.480747 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf1
06870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.495665 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.509237 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.525111 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.536669 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.536698 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.536706 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.536721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.536732 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:31Z","lastTransitionTime":"2026-01-21T21:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.540192 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.554929 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.572727 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.577878 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.577966 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:31 crc kubenswrapper[4860]: E0121 21:09:31.578075 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:31 crc kubenswrapper[4860]: E0121 21:09:31.578152 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.586851 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.612193 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:13Z\\\",\\\"message\\\":\\\"e-config-operator/machine-config-daemon-w47lx\\\\nI0121 21:09:13.484329 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0121 21:09:13.484114 6302 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c in node crc\\\\nI0121 
21:09:13.484340 6302 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0121 21:09:13.484351 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c after 0 failed attempt(s)\\\\nI0121 21:09:13.484365 6302 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-pzw2c\\\\nI0121 21:09:13.484318 6302 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 21:09:13.484151 6302 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0121 21:09:13.484389 6302 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0121 21:09:13.484391 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.627289 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc 
kubenswrapper[4860]: I0121 21:09:31.639458 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.639536 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.639551 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.639570 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.639866 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:31Z","lastTransitionTime":"2026-01-21T21:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.643690 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.662130 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa
4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.677553 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.695491 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.710428 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T21:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.742532 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.742582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.742593 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.742610 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.742621 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:31Z","lastTransitionTime":"2026-01-21T21:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.845307 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.845383 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.845401 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.845851 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.845899 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:31Z","lastTransitionTime":"2026-01-21T21:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.949443 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.949493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.949516 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.949537 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:31 crc kubenswrapper[4860]: I0121 21:09:31.949550 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:31Z","lastTransitionTime":"2026-01-21T21:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.053135 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.053187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.053198 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.053219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.053231 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:32Z","lastTransitionTime":"2026-01-21T21:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.156633 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.156688 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.156702 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.156724 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.156738 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:32Z","lastTransitionTime":"2026-01-21T21:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.213890 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:07:52.621666342 +0000 UTC Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.260001 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.260048 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.260058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.260074 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.260086 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:32Z","lastTransitionTime":"2026-01-21T21:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.363446 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.363486 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.363495 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.363510 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.363520 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:32Z","lastTransitionTime":"2026-01-21T21:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.429081 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/2.log" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.430357 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/1.log" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.434455 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849" exitCode=1 Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.434513 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849"} Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.434568 4860 scope.go:117] "RemoveContainer" containerID="816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.436586 4860 scope.go:117] "RemoveContainer" containerID="4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849" Jan 21 21:09:32 crc kubenswrapper[4860]: E0121 21:09:32.437275 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.451814 4860 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.467167 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.467234 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.467251 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.467279 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.467297 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:32Z","lastTransitionTime":"2026-01-21T21:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.478589 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816a09597516e0a6d6e5d621858073f61af5a9dad3fb66937f7dd9de751565de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:13Z\\\",\\\"message\\\":\\\"e-config-operator/machine-config-daemon-w47lx\\\\nI0121 21:09:13.484329 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0121 21:09:13.484114 6302 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c in node crc\\\\nI0121 
21:09:13.484340 6302 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0121 21:09:13.484351 6302 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-pzw2c after 0 failed attempt(s)\\\\nI0121 21:09:13.484365 6302 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-pzw2c\\\\nI0121 21:09:13.484318 6302 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 21:09:13.484151 6302 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0121 21:09:13.484389 6302 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0121 21:09:13.484391 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:31Z\\\",\\\"message\\\":\\\"\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/check-endpoints_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0121 21:09:31.464286 6573 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 
0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af5
5b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.493044 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.510038 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 
21:09:32.530269 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec3
7d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"
cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.546071 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.564100 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.569490 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.569544 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.569560 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.569583 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.569600 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:32Z","lastTransitionTime":"2026-01-21T21:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.579164 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.579179 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:32 crc kubenswrapper[4860]: E0121 21:09:32.579366 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:32 crc kubenswrapper[4860]: E0121 21:09:32.579463 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.580785 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.597662 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:
50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b124
9ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.610727 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.626042 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf1
06870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.640437 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.654330 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.667106 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.671973 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.672045 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.672059 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.672084 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.672103 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:32Z","lastTransitionTime":"2026-01-21T21:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.683058 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.698057 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.712038 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.774850 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.774922 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.774957 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.774983 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.774999 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:32Z","lastTransitionTime":"2026-01-21T21:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.878179 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.878241 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.878261 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.878294 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.878315 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:32Z","lastTransitionTime":"2026-01-21T21:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.983187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.983268 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.983292 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.983323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:32 crc kubenswrapper[4860]: I0121 21:09:32.983345 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:32Z","lastTransitionTime":"2026-01-21T21:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.086211 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.086255 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.086271 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.086293 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.086306 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.188603 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.188645 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.188658 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.188676 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.188689 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.214921 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:28:40.542502559 +0000 UTC Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.291273 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.291323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.291335 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.291354 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.291368 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.394576 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.394637 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.394651 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.394672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.394685 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.401334 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.401380 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.401391 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.401408 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.401422 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: E0121 21:09:33.418130 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.423329 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.423537 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.423668 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.423829 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.423968 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: E0121 21:09:33.441420 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.443947 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/2.log" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.446699 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.446745 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.446756 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.446776 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.446789 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.449851 4860 scope.go:117] "RemoveContainer" containerID="4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849" Jan 21 21:09:33 crc kubenswrapper[4860]: E0121 21:09:33.450635 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" Jan 21 21:09:33 crc kubenswrapper[4860]: E0121 21:09:33.461497 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.465628 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.467997 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.468029 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.468040 4860 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.468059 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.468075 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: E0121 21:09:33.482473 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.483412 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.487294 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.487353 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.487381 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.487400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.487419 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.500293 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: E0121 21:09:33.501277 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: E0121 21:09:33.501439 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.503989 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.504030 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.504042 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.504063 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.504075 4860 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.518272 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-scrip
t\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.535840 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.551772 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.563545 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.578537 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.578667 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:33 crc kubenswrapper[4860]: E0121 21:09:33.578731 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:33 crc kubenswrapper[4860]: E0121 21:09:33.578906 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.584097 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:31Z\\\",\\\"message\\\":\\\"\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/check-endpoints_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0121 21:09:31.464286 6573 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.596777 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.607408 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.607444 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.607457 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.607475 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.607487 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.609815 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.623431 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.639408 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.654606 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.667669 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.681006 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.691775 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.701886 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf1
06870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.709667 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.709694 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.709707 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc 
kubenswrapper[4860]: I0121 21:09:33.709729 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.709742 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.813122 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.813174 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.813194 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.813224 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.813248 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.916131 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.916180 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.916190 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.916206 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:33 crc kubenswrapper[4860]: I0121 21:09:33.916217 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:33Z","lastTransitionTime":"2026-01-21T21:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.020154 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.020210 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.020225 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.020244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.020264 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:34Z","lastTransitionTime":"2026-01-21T21:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.123745 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.123834 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.123845 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.123865 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.123879 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:34Z","lastTransitionTime":"2026-01-21T21:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.215225 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:23:57.559582178 +0000 UTC Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.226455 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.226518 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.226530 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.226553 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.226567 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:34Z","lastTransitionTime":"2026-01-21T21:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.330177 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.330217 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.330228 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.330245 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.330257 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:34Z","lastTransitionTime":"2026-01-21T21:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.432810 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.432864 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.432877 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.432899 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.432914 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:34Z","lastTransitionTime":"2026-01-21T21:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.536441 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.536495 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.536507 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.536528 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.536543 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:34Z","lastTransitionTime":"2026-01-21T21:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.578109 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.578185 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:34 crc kubenswrapper[4860]: E0121 21:09:34.578306 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:34 crc kubenswrapper[4860]: E0121 21:09:34.578385 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.638387 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.638467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.638479 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.638499 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.638512 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:34Z","lastTransitionTime":"2026-01-21T21:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.740654 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.740683 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.740691 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.740705 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.740714 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:34Z","lastTransitionTime":"2026-01-21T21:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.844251 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.844597 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.844699 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.844806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.844888 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:34Z","lastTransitionTime":"2026-01-21T21:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.948634 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.948688 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.948699 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.948717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:34 crc kubenswrapper[4860]: I0121 21:09:34.948728 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:34Z","lastTransitionTime":"2026-01-21T21:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.051795 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.051845 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.051857 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.051882 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.051896 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:35Z","lastTransitionTime":"2026-01-21T21:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.154214 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.154987 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.155077 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.155155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.155225 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:35Z","lastTransitionTime":"2026-01-21T21:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.215785 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:40:10.865970756 +0000 UTC Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.258807 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.258852 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.258862 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.258879 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.258893 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:35Z","lastTransitionTime":"2026-01-21T21:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.466957 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.467024 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.467034 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.467049 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.467065 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:35Z","lastTransitionTime":"2026-01-21T21:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.569438 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.569497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.569510 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.569537 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.569550 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:35Z","lastTransitionTime":"2026-01-21T21:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.577973 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.578117 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:35 crc kubenswrapper[4860]: E0121 21:09:35.578231 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:35 crc kubenswrapper[4860]: E0121 21:09:35.578418 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.672082 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.672127 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.672140 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.672160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.672177 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:35Z","lastTransitionTime":"2026-01-21T21:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.774821 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.774872 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.774887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.774911 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.774923 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:35Z","lastTransitionTime":"2026-01-21T21:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.877835 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.877877 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.877887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.877904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.877914 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:35Z","lastTransitionTime":"2026-01-21T21:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.980920 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.980979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.980991 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.981008 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:35 crc kubenswrapper[4860]: I0121 21:09:35.981024 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:35Z","lastTransitionTime":"2026-01-21T21:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.084373 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.084424 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.084437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.084454 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.084467 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:36Z","lastTransitionTime":"2026-01-21T21:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.188252 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.188295 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.188307 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.188325 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.188336 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:36Z","lastTransitionTime":"2026-01-21T21:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.216695 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:27:12.926809543 +0000 UTC Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.291494 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.291543 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.291553 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.291574 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.291584 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:36Z","lastTransitionTime":"2026-01-21T21:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.394606 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.394668 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.394680 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.394704 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.394717 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:36Z","lastTransitionTime":"2026-01-21T21:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.498129 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.498199 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.498210 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.498227 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.498238 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:36Z","lastTransitionTime":"2026-01-21T21:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.578574 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.578841 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:36 crc kubenswrapper[4860]: E0121 21:09:36.578997 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:36 crc kubenswrapper[4860]: E0121 21:09:36.579216 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.602234 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.602464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.602499 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.602530 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.602549 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:36Z","lastTransitionTime":"2026-01-21T21:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.705707 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.705767 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.705783 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.705809 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.705822 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:36Z","lastTransitionTime":"2026-01-21T21:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.809449 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.809501 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.809513 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.809536 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.809550 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:36Z","lastTransitionTime":"2026-01-21T21:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.912851 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.912981 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.913003 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.913035 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:36 crc kubenswrapper[4860]: I0121 21:09:36.913056 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:36Z","lastTransitionTime":"2026-01-21T21:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.146924 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.147014 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.147035 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.147075 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.147090 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:37Z","lastTransitionTime":"2026-01-21T21:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.217245 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:57:07.091760094 +0000 UTC Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.250426 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.250457 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.250467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.250494 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.250506 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:37Z","lastTransitionTime":"2026-01-21T21:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.353704 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.353770 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.353789 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.353820 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.353839 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:37Z","lastTransitionTime":"2026-01-21T21:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.456490 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.456568 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.456580 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.456602 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.456615 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:37Z","lastTransitionTime":"2026-01-21T21:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.561515 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.561620 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.561639 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.561672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.561692 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:37Z","lastTransitionTime":"2026-01-21T21:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.578650 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.578901 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:37 crc kubenswrapper[4860]: E0121 21:09:37.578959 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:37 crc kubenswrapper[4860]: E0121 21:09:37.579234 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.664814 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.664860 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.664871 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.664885 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.664896 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:37Z","lastTransitionTime":"2026-01-21T21:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.768165 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.768216 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.768227 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.768245 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.768259 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:37Z","lastTransitionTime":"2026-01-21T21:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.871781 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.871828 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.871838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.871857 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.871895 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:37Z","lastTransitionTime":"2026-01-21T21:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.974870 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.974917 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.974928 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.974972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:37 crc kubenswrapper[4860]: I0121 21:09:37.974987 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:37Z","lastTransitionTime":"2026-01-21T21:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.077787 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.077836 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.077845 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.077861 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.077872 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:38Z","lastTransitionTime":"2026-01-21T21:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.180279 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.180321 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.180336 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.180353 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.180365 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:38Z","lastTransitionTime":"2026-01-21T21:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.217752 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:36:55.825703273 +0000 UTC Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.283512 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.283553 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.283564 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.283583 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.283594 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:38Z","lastTransitionTime":"2026-01-21T21:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.386322 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.386370 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.386378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.386396 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.386405 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:38Z","lastTransitionTime":"2026-01-21T21:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.489610 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.489883 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.489929 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.490050 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.490142 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:38Z","lastTransitionTime":"2026-01-21T21:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.579121 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.579128 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:38 crc kubenswrapper[4860]: E0121 21:09:38.579602 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:38 crc kubenswrapper[4860]: E0121 21:09:38.579863 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.592784 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.593308 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.593477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.593710 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.593907 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:38Z","lastTransitionTime":"2026-01-21T21:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.599434 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.616247 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.628182 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf1
06870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.644108 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.659068 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.672426 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.686369 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.697157 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.697226 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.697239 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.697283 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.697297 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:38Z","lastTransitionTime":"2026-01-21T21:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.700293 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.711788 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.722203 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.766368 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:31Z\\\",\\\"message\\\":\\\"\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/check-endpoints_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0121 21:09:31.464286 6573 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.785119 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.799552 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.799592 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.799602 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.799616 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.799625 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:38Z","lastTransitionTime":"2026-01-21T21:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.804428 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.818204 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.829681 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.840717 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.854914 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa
4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.901248 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.901293 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.901313 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.901331 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:38 crc kubenswrapper[4860]: I0121 21:09:38.901343 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:38Z","lastTransitionTime":"2026-01-21T21:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.003928 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.004054 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.004066 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.004083 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.004093 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:39Z","lastTransitionTime":"2026-01-21T21:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.106610 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.106653 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.106664 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.106681 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.106695 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:39Z","lastTransitionTime":"2026-01-21T21:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.208992 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.209034 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.209045 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.209062 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.209074 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:39Z","lastTransitionTime":"2026-01-21T21:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.218480 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:28:35.414567051 +0000 UTC Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.312250 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.312352 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.312362 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.312377 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.312388 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:39Z","lastTransitionTime":"2026-01-21T21:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.415082 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.415137 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.415155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.415180 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.415197 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:39Z","lastTransitionTime":"2026-01-21T21:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.518132 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.518175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.518190 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.518207 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.518218 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:39Z","lastTransitionTime":"2026-01-21T21:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.578076 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.578076 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:39 crc kubenswrapper[4860]: E0121 21:09:39.578234 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:39 crc kubenswrapper[4860]: E0121 21:09:39.578349 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.620774 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.621032 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.621049 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.621069 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.621081 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:39Z","lastTransitionTime":"2026-01-21T21:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.723683 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.723737 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.723751 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.723769 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.723782 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:39Z","lastTransitionTime":"2026-01-21T21:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.826679 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.826723 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.826733 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.826754 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.826762 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:39Z","lastTransitionTime":"2026-01-21T21:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.929647 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.929695 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.929705 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.929725 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:39 crc kubenswrapper[4860]: I0121 21:09:39.929737 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:39Z","lastTransitionTime":"2026-01-21T21:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.032846 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.032889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.032898 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.032917 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.032943 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:40Z","lastTransitionTime":"2026-01-21T21:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.136091 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.136138 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.136151 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.136171 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.136183 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:40Z","lastTransitionTime":"2026-01-21T21:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.219193 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:36:32.727947209 +0000 UTC Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.238078 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.238426 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.238436 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.238450 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.238459 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:40Z","lastTransitionTime":"2026-01-21T21:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.342369 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.342410 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.342421 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.342436 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.342448 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:40Z","lastTransitionTime":"2026-01-21T21:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.445347 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.445409 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.445422 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.445449 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.445462 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:40Z","lastTransitionTime":"2026-01-21T21:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.549963 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.550010 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.550021 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.550039 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.550055 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:40Z","lastTransitionTime":"2026-01-21T21:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.579805 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:40 crc kubenswrapper[4860]: E0121 21:09:40.579979 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.580620 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:40 crc kubenswrapper[4860]: E0121 21:09:40.580728 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.654263 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.654312 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.654325 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.654344 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.654355 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:40Z","lastTransitionTime":"2026-01-21T21:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.757430 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.757479 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.757488 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.757508 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.757520 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:40Z","lastTransitionTime":"2026-01-21T21:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.860282 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.860311 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.860323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.860338 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.860348 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:40Z","lastTransitionTime":"2026-01-21T21:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.963310 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.963359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.963371 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.963389 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:40 crc kubenswrapper[4860]: I0121 21:09:40.963402 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:40Z","lastTransitionTime":"2026-01-21T21:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.066591 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.066653 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.066666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.066687 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.066699 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:41Z","lastTransitionTime":"2026-01-21T21:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.172406 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.172458 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.172467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.172507 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.172524 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:41Z","lastTransitionTime":"2026-01-21T21:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.219954 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:50:59.497566794 +0000 UTC Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.277468 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.277542 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.277557 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.277579 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.277600 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:41Z","lastTransitionTime":"2026-01-21T21:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.381058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.381128 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.381139 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.381159 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.381171 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:41Z","lastTransitionTime":"2026-01-21T21:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.484556 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.484636 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.484655 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.484683 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.484703 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:41Z","lastTransitionTime":"2026-01-21T21:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.578617 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:41 crc kubenswrapper[4860]: E0121 21:09:41.579034 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.578617 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:41 crc kubenswrapper[4860]: E0121 21:09:41.579358 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.587239 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.587272 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.587282 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.587295 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.587305 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:41Z","lastTransitionTime":"2026-01-21T21:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.690520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.690581 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.690596 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.690623 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.690637 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:41Z","lastTransitionTime":"2026-01-21T21:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.794115 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.794170 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.794183 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.794205 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.794221 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:41Z","lastTransitionTime":"2026-01-21T21:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.897317 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.897364 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.897377 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.897396 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:41 crc kubenswrapper[4860]: I0121 21:09:41.897409 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:41Z","lastTransitionTime":"2026-01-21T21:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.000740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.000786 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.000799 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.000817 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.000829 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:42Z","lastTransitionTime":"2026-01-21T21:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.103132 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.103525 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.103655 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.103797 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.104004 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:42Z","lastTransitionTime":"2026-01-21T21:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.195348 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:42 crc kubenswrapper[4860]: E0121 21:09:42.195863 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:42 crc kubenswrapper[4860]: E0121 21:09:42.196066 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs podName:60ae05da-3403-4a2f-92f4-2ffa574a65a8 nodeName:}" failed. No retries permitted until 2026-01-21 21:10:14.196015713 +0000 UTC m=+106.418194333 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs") pod "network-metrics-daemon-rrwcr" (UID: "60ae05da-3403-4a2f-92f4-2ffa574a65a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.206672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.206708 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.206717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.206731 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.206746 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:42Z","lastTransitionTime":"2026-01-21T21:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.220314 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:26:17.914271085 +0000 UTC Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.310400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.310446 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.310458 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.310478 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.310493 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:42Z","lastTransitionTime":"2026-01-21T21:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.413355 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.413413 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.413484 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.413509 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.413525 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:42Z","lastTransitionTime":"2026-01-21T21:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.515622 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.515676 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.515690 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.515706 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.515719 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:42Z","lastTransitionTime":"2026-01-21T21:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.578032 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.578067 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:42 crc kubenswrapper[4860]: E0121 21:09:42.578215 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:42 crc kubenswrapper[4860]: E0121 21:09:42.578346 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.618906 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.618969 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.618981 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.619000 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.619014 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:42Z","lastTransitionTime":"2026-01-21T21:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.722389 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.722479 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.722505 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.722541 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.722565 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:42Z","lastTransitionTime":"2026-01-21T21:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.826972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.827043 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.827073 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.827131 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.827161 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:42Z","lastTransitionTime":"2026-01-21T21:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.930336 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.930428 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.930486 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.930524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:42 crc kubenswrapper[4860]: I0121 21:09:42.930552 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:42Z","lastTransitionTime":"2026-01-21T21:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.034001 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.034057 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.034068 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.034088 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.034100 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.137827 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.137907 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.137921 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.137968 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.137989 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.221010 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:15:04.225517361 +0000 UTC Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.242241 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.242325 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.242350 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.242382 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.242402 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.346814 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.346885 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.346900 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.346925 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.346967 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.450220 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.450300 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.450320 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.450354 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.450374 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.553728 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.553819 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.553839 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.553871 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.553891 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.578239 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.578331 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:43 crc kubenswrapper[4860]: E0121 21:09:43.578489 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:43 crc kubenswrapper[4860]: E0121 21:09:43.578654 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.657255 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.657292 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.657305 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.657323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.657335 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.737875 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.737927 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.737963 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.737983 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.737996 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: E0121 21:09:43.759562 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:43Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.764998 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.765226 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.765277 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.765354 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.765444 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: E0121 21:09:43.783058 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:43Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.788605 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.788667 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.788687 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.788716 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.788737 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: E0121 21:09:43.807541 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:43Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.813411 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.813477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.813495 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.813519 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.813555 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: E0121 21:09:43.828907 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:43Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.834474 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.834536 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.834556 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.834582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.834598 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: E0121 21:09:43.853489 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:43Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:43 crc kubenswrapper[4860]: E0121 21:09:43.853616 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.855944 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.855993 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.856017 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.856041 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.856054 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.958961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.959013 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.959026 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.959044 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:43 crc kubenswrapper[4860]: I0121 21:09:43.959058 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:43Z","lastTransitionTime":"2026-01-21T21:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.062020 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.062074 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.062089 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.062107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.062120 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:44Z","lastTransitionTime":"2026-01-21T21:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.165313 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.165401 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.165421 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.165454 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.165475 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:44Z","lastTransitionTime":"2026-01-21T21:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.222183 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 22:49:35.903043818 +0000 UTC
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.292145 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.292401 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.292448 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.292589 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.292683 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:44Z","lastTransitionTime":"2026-01-21T21:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.400744 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.400810 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.400825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.400861 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.400875 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:44Z","lastTransitionTime":"2026-01-21T21:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.504398 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.504487 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.504516 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.504545 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.504605 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:44Z","lastTransitionTime":"2026-01-21T21:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.517586 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/0.log" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.517693 4860 generic.go:334] "Generic (PLEG): container finished" podID="e2a7ca69-9cb5-41b5-9213-72165a9fc8e1" containerID="0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168" exitCode=1 Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.517782 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s67xh" event={"ID":"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1","Type":"ContainerDied","Data":"0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168"} Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.518715 4860 scope.go:117] "RemoveContainer" containerID="0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.536124 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.556036 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.577963 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.578265 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.578284 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:44 crc kubenswrapper[4860]: E0121 21:09:44.578451 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:44 crc kubenswrapper[4860]: E0121 21:09:44.578742 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.596915 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.607060 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.607116 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.607129 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.607149 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.607162 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:44Z","lastTransitionTime":"2026-01-21T21:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.616167 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.630002 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.644600 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc 
kubenswrapper[4860]: I0121 21:09:44.656151 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r
gr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.678433 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:31Z\\\",\\\"message\\\":\\\"\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/check-endpoints_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0121 21:09:31.464286 6573 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.695104 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:44Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:44Z\\\",\\\"message\\\":\\\"2026-01-21T21:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b\\\\n2026-01-21T21:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b to /host/opt/cni/bin/\\\\n2026-01-21T21:08:59Z [verbose] multus-daemon started\\\\n2026-01-21T21:08:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T21:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.707385 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.718379 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.718437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.718447 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.718461 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.718472 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:44Z","lastTransitionTime":"2026-01-21T21:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.731216 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.745261 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.761099 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.777830 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.791721 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.804795 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf1
06870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:44Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.821403 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.821778 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.821878 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:44 crc 
kubenswrapper[4860]: I0121 21:09:44.821987 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.822078 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:44Z","lastTransitionTime":"2026-01-21T21:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.924977 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.925529 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.925619 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.925706 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:44 crc kubenswrapper[4860]: I0121 21:09:44.925809 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:44Z","lastTransitionTime":"2026-01-21T21:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.029043 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.029097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.029107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.029124 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.029175 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:45Z","lastTransitionTime":"2026-01-21T21:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.133193 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.133255 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.133265 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.133287 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.133301 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:45Z","lastTransitionTime":"2026-01-21T21:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.223246 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:33:03.300521347 +0000 UTC Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.236857 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.236924 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.236979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.237011 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.237030 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:45Z","lastTransitionTime":"2026-01-21T21:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.340708 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.340765 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.340777 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.340796 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.340807 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:45Z","lastTransitionTime":"2026-01-21T21:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.444663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.444718 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.444727 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.444854 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.444876 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:45Z","lastTransitionTime":"2026-01-21T21:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.526539 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/0.log" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.526596 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s67xh" event={"ID":"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1","Type":"ContainerStarted","Data":"ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be"} Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.546520 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.548829 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.548896 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:45 crc kubenswrapper[4860]: 
I0121 21:09:45.548922 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.548971 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.548984 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:45Z","lastTransitionTime":"2026-01-21T21:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.563426 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.576124 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf1
06870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.578347 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.578439 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:45 crc kubenswrapper[4860]: E0121 21:09:45.578521 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:45 crc kubenswrapper[4860]: E0121 21:09:45.578629 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.589587 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.603248 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.617728 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.633969 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.652106 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.652165 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.652181 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.652207 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.652224 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:45Z","lastTransitionTime":"2026-01-21T21:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.653571 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.671167 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.687092 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.708220 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:31Z\\\",\\\"message\\\":\\\"\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/check-endpoints_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0121 21:09:31.464286 6573 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.725319 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.741270 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.755625 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.755685 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.755700 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.755721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.755738 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:45Z","lastTransitionTime":"2026-01-21T21:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.762407 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.779406 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:44Z\\\",\\\"message\\\":\\\"2026-01-21T21:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b\\\\n2026-01-21T21:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b to /host/opt/cni/bin/\\\\n2026-01-21T21:08:59Z [verbose] multus-daemon started\\\\n2026-01-21T21:08:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T21:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.794504 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.812798 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:45Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.858125 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.858175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.858187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.858244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.858259 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:45Z","lastTransitionTime":"2026-01-21T21:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.961015 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.961061 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.961073 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.961092 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:45 crc kubenswrapper[4860]: I0121 21:09:45.961102 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:45Z","lastTransitionTime":"2026-01-21T21:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.064864 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.064909 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.064918 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.064959 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.064971 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:46Z","lastTransitionTime":"2026-01-21T21:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.167476 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.167523 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.167534 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.167604 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.167622 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:46Z","lastTransitionTime":"2026-01-21T21:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.223537 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 22:08:56.53258628 +0000 UTC Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.271332 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.271388 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.271400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.271425 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.271440 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:46Z","lastTransitionTime":"2026-01-21T21:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.374503 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.374556 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.374574 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.374599 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.374616 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:46Z","lastTransitionTime":"2026-01-21T21:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.477591 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.477646 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.477659 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.477678 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.477691 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:46Z","lastTransitionTime":"2026-01-21T21:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.578286 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.578747 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:46 crc kubenswrapper[4860]: E0121 21:09:46.578915 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:46 crc kubenswrapper[4860]: E0121 21:09:46.579339 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.580202 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.580284 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.580314 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.580352 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.580398 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:46Z","lastTransitionTime":"2026-01-21T21:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.580816 4860 scope.go:117] "RemoveContainer" containerID="4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849" Jan 21 21:09:46 crc kubenswrapper[4860]: E0121 21:09:46.581392 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.597349 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.683285 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.683703 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.683792 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.683962 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.684035 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:46Z","lastTransitionTime":"2026-01-21T21:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.787327 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.787376 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.787392 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.787418 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.787432 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:46Z","lastTransitionTime":"2026-01-21T21:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.890618 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.890738 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.890762 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.890811 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.890842 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:46Z","lastTransitionTime":"2026-01-21T21:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.994252 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.994307 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.994318 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.994340 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:46 crc kubenswrapper[4860]: I0121 21:09:46.994355 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:46Z","lastTransitionTime":"2026-01-21T21:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.098057 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.098125 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.098147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.098174 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.098191 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:47Z","lastTransitionTime":"2026-01-21T21:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.201594 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.201678 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.201724 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.201764 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.201790 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:47Z","lastTransitionTime":"2026-01-21T21:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.224339 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:21:58.710598928 +0000 UTC Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.305552 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.305602 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.305614 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.305637 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.305653 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:47Z","lastTransitionTime":"2026-01-21T21:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.409250 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.409326 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.409349 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.409402 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.409429 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:47Z","lastTransitionTime":"2026-01-21T21:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.512784 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.512834 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.512857 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.512884 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.512902 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:47Z","lastTransitionTime":"2026-01-21T21:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.578597 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.578617 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:47 crc kubenswrapper[4860]: E0121 21:09:47.578732 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:47 crc kubenswrapper[4860]: E0121 21:09:47.578815 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.617199 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.617365 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.617676 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.617740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.617769 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:47Z","lastTransitionTime":"2026-01-21T21:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.722053 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.722141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.722167 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.722205 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.722234 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:47Z","lastTransitionTime":"2026-01-21T21:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.826252 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.826330 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.826362 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.826397 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.826426 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:47Z","lastTransitionTime":"2026-01-21T21:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.929876 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.929994 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.930014 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.930039 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:47 crc kubenswrapper[4860]: I0121 21:09:47.930056 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:47Z","lastTransitionTime":"2026-01-21T21:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.033495 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.033543 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.033552 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.033569 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.033607 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:48Z","lastTransitionTime":"2026-01-21T21:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.137636 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.137725 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.137751 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.137787 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.137813 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:48Z","lastTransitionTime":"2026-01-21T21:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.225069 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:11:49.449748738 +0000 UTC Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.241994 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.242048 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.242058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.242080 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.242094 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:48Z","lastTransitionTime":"2026-01-21T21:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.346443 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.346535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.346559 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.346594 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.346616 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:48Z","lastTransitionTime":"2026-01-21T21:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.450302 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.450410 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.450425 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.450453 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.450469 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:48Z","lastTransitionTime":"2026-01-21T21:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.554696 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.554782 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.554807 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.554838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.554860 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:48Z","lastTransitionTime":"2026-01-21T21:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.578459 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:48 crc kubenswrapper[4860]: E0121 21:09:48.579014 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.579705 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:48 crc kubenswrapper[4860]: E0121 21:09:48.579851 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.603483 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.626053 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.662142 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.662543 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.662651 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.662749 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.662838 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:48Z","lastTransitionTime":"2026-01-21T21:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.664591 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c635d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.690922 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.712874 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.733162 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.748240 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc 
kubenswrapper[4860]: I0121 21:09:48.765093 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.765120 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.765128 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.765142 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.765151 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:48Z","lastTransitionTime":"2026-01-21T21:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.765635 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5d7193-f8b2-4564-a461-75ad8c9febcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2b68332811aeb46cfec71d7c7809aa12d356779e431bb5e68f4306b2147cec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.779321 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.799544 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:31Z\\\",\\\"message\\\":\\\"\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/check-endpoints_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0121 21:09:31.464286 6573 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.814899 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:44Z\\\",\\\"message\\\":\\\"2026-01-21T21:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b\\\\n2026-01-21T21:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b to /host/opt/cni/bin/\\\\n2026-01-21T21:08:59Z [verbose] multus-daemon started\\\\n2026-01-21T21:08:59Z [verbose] 
Readiness Indicator file check\\\\n2026-01-21T21:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.830962 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.849584 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa
4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.867661 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.868270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.868396 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.868512 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.868601 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.868684 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:48Z","lastTransitionTime":"2026-01-21T21:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.883237 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.899156 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set 
denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.922645 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
1T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.939196 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.972391 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.972435 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.972445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.972463 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:48 crc kubenswrapper[4860]: I0121 21:09:48.972476 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:48Z","lastTransitionTime":"2026-01-21T21:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.075251 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.075582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.075884 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.076184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.076456 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:49Z","lastTransitionTime":"2026-01-21T21:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.179073 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.179135 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.179150 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.179167 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.179182 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:49Z","lastTransitionTime":"2026-01-21T21:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.226023 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:40:45.929792199 +0000 UTC Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.282318 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.282626 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.282767 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.282866 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.282979 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:49Z","lastTransitionTime":"2026-01-21T21:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.386518 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.386887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.387103 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.387286 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.387487 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:49Z","lastTransitionTime":"2026-01-21T21:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.490824 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.490882 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.490901 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.490925 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.490968 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:49Z","lastTransitionTime":"2026-01-21T21:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.579057 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.579158 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:49 crc kubenswrapper[4860]: E0121 21:09:49.579350 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:49 crc kubenswrapper[4860]: E0121 21:09:49.580028 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.594477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.594558 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.594578 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.594609 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.594630 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:49Z","lastTransitionTime":"2026-01-21T21:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.697746 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.697829 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.697857 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.697890 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.697914 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:49Z","lastTransitionTime":"2026-01-21T21:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.801526 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.801576 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.801589 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.801610 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.801624 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:49Z","lastTransitionTime":"2026-01-21T21:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.903974 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.904027 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.904039 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.904218 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:49 crc kubenswrapper[4860]: I0121 21:09:49.904242 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:49Z","lastTransitionTime":"2026-01-21T21:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.006308 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.006342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.006351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.006367 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.006377 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:50Z","lastTransitionTime":"2026-01-21T21:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.109251 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.109300 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.109319 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.109343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.109358 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:50Z","lastTransitionTime":"2026-01-21T21:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.213271 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.213317 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.213333 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.213350 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.213361 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:50Z","lastTransitionTime":"2026-01-21T21:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.226598 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:40:26.402551298 +0000 UTC
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.316434 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.316542 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.316605 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.316631 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.316692 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:50Z","lastTransitionTime":"2026-01-21T21:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.420453 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.420500 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.420511 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.420531 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.420543 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:50Z","lastTransitionTime":"2026-01-21T21:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.523484 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.523526 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.523536 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.523553 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.523564 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:50Z","lastTransitionTime":"2026-01-21T21:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.578970 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr"
Jan 21 21:09:50 crc kubenswrapper[4860]: E0121 21:09:50.579163 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.579215 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 21:09:50 crc kubenswrapper[4860]: E0121 21:09:50.579315 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.653186 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.653235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.653244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.653265 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.653276 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:50Z","lastTransitionTime":"2026-01-21T21:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.755669 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.755726 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.755738 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.755856 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.755874 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:50Z","lastTransitionTime":"2026-01-21T21:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.858077 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.858115 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.858125 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.858140 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.858149 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:50Z","lastTransitionTime":"2026-01-21T21:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.960393 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.960438 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.960448 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.960464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:50 crc kubenswrapper[4860]: I0121 21:09:50.960475 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:50Z","lastTransitionTime":"2026-01-21T21:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.063097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.063153 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.063167 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.063187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.063203 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:51Z","lastTransitionTime":"2026-01-21T21:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.165562 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.165603 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.165613 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.165630 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.165642 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:51Z","lastTransitionTime":"2026-01-21T21:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.227759 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:36:42.33728755 +0000 UTC
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.268350 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.268411 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.268430 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.268456 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.268476 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:51Z","lastTransitionTime":"2026-01-21T21:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.554465 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.554617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.554631 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.554652 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.554668 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:51Z","lastTransitionTime":"2026-01-21T21:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.578093 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.578209 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 21:09:51 crc kubenswrapper[4860]: E0121 21:09:51.578273 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 21:09:51 crc kubenswrapper[4860]: E0121 21:09:51.578435 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.658321 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.658386 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.658402 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.658426 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.658449 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:51Z","lastTransitionTime":"2026-01-21T21:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.761480 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.761530 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.761548 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.761567 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.761580 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:51Z","lastTransitionTime":"2026-01-21T21:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.864889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.864983 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.864993 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.865011 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.865033 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:51Z","lastTransitionTime":"2026-01-21T21:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.966882 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.966923 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.966937 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.966976 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:51 crc kubenswrapper[4860]: I0121 21:09:51.966988 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:51Z","lastTransitionTime":"2026-01-21T21:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.074393 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.074459 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.074470 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.074509 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.074520 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:52Z","lastTransitionTime":"2026-01-21T21:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.178396 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.178483 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.178507 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.178540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.178560 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:52Z","lastTransitionTime":"2026-01-21T21:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.229030 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:32:51.443645602 +0000 UTC
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.282571 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.282650 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.282669 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.282696 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.282716 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:52Z","lastTransitionTime":"2026-01-21T21:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.386217 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.386258 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.386270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.386288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.386301 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:52Z","lastTransitionTime":"2026-01-21T21:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.489301 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.489391 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.489420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.489459 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.489481 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:52Z","lastTransitionTime":"2026-01-21T21:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.578519 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr"
Jan 21 21:09:52 crc kubenswrapper[4860]: E0121 21:09:52.578677 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.578700 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 21:09:52 crc kubenswrapper[4860]: E0121 21:09:52.578894 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.592232 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.592286 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.592301 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.592318 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.592331 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:52Z","lastTransitionTime":"2026-01-21T21:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.694856 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.694923 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.694979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.695016 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.695040 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:52Z","lastTransitionTime":"2026-01-21T21:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.798167 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.798234 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.798243 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.798453 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.798463 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:52Z","lastTransitionTime":"2026-01-21T21:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.901692 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.901735 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.901747 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.901766 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:52 crc kubenswrapper[4860]: I0121 21:09:52.901779 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:52Z","lastTransitionTime":"2026-01-21T21:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.004609 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.004674 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.004693 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.004718 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.004735 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:53Z","lastTransitionTime":"2026-01-21T21:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.107777 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.107814 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.107824 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.107838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.107848 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:53Z","lastTransitionTime":"2026-01-21T21:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.210868 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.210971 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.210991 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.211014 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.211031 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:53Z","lastTransitionTime":"2026-01-21T21:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.230469 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:03:24.174717213 +0000 UTC Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.314762 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.314808 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.314833 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.314856 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.314876 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:53Z","lastTransitionTime":"2026-01-21T21:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.417165 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.417215 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.417230 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.417247 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.417256 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:53Z","lastTransitionTime":"2026-01-21T21:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.519331 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.519451 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.519478 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.519516 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.519541 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:53Z","lastTransitionTime":"2026-01-21T21:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.578666 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.578784 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:53 crc kubenswrapper[4860]: E0121 21:09:53.578804 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:53 crc kubenswrapper[4860]: E0121 21:09:53.578979 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.622464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.622516 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.622530 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.622549 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.622564 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:53Z","lastTransitionTime":"2026-01-21T21:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.725198 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.725246 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.725255 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.725272 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.725287 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:53Z","lastTransitionTime":"2026-01-21T21:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.827617 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.827701 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.827724 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.827751 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.827772 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:53Z","lastTransitionTime":"2026-01-21T21:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.931245 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.931370 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.931391 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.931419 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:53 crc kubenswrapper[4860]: I0121 21:09:53.931445 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:53Z","lastTransitionTime":"2026-01-21T21:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.014914 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.015026 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.015047 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.015079 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.015108 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: E0121 21:09:54.040102 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:54Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.046236 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.046297 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.046316 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.046341 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.046359 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: E0121 21:09:54.073146 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:54Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.080160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.080216 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.080235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.080256 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.080271 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: E0121 21:09:54.109600 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:54Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.116541 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.116705 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.116741 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.116778 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.116841 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: E0121 21:09:54.139377 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:54Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.144951 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.144990 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.145033 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.145054 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.145067 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: E0121 21:09:54.166611 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:54Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:54 crc kubenswrapper[4860]: E0121 21:09:54.166771 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.169446 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.169492 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.169507 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.169537 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.169553 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.231128 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:27:40.373976537 +0000 UTC Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.272343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.272382 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.272573 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.272591 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.272605 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.376054 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.376091 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.376102 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.376120 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.376132 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.478673 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.478725 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.478735 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.478753 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.478775 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.578555 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.578577 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:54 crc kubenswrapper[4860]: E0121 21:09:54.578771 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:54 crc kubenswrapper[4860]: E0121 21:09:54.579073 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.581048 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.581107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.581127 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.581153 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.581171 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.685255 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.685318 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.685338 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.685363 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.685382 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.787923 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.787983 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.787997 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.788014 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.788025 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.887646 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:09:54 crc kubenswrapper[4860]: E0121 21:09:54.888309 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 21:10:58.888230437 +0000 UTC m=+151.110408957 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.890075 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.890103 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.890114 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.890132 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.890143 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.992675 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.992708 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.992718 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.992731 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:54 crc kubenswrapper[4860]: I0121 21:09:54.992740 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:54Z","lastTransitionTime":"2026-01-21T21:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.111682 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.111728 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.111755 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.111799 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.111922 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.111989 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:10:59.111974506 +0000 UTC m=+151.334152976 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.112024 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.112060 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.112079 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.112156 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-21 21:10:59.112132421 +0000 UTC m=+151.334310881 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.112220 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.112243 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.112258 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.112317 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 21:10:59.112304016 +0000 UTC m=+151.334482686 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.112336 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.112459 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:10:59.11243152 +0000 UTC m=+151.334610000 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.114007 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.114036 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.114047 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.114064 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.114074 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:55Z","lastTransitionTime":"2026-01-21T21:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.216496 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.216542 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.216551 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.216567 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.216578 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:55Z","lastTransitionTime":"2026-01-21T21:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.232011 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:40:38.977175749 +0000 UTC Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.319502 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.319555 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.319568 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.319592 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.319606 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:55Z","lastTransitionTime":"2026-01-21T21:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.422607 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.422650 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.422663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.422681 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.422693 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:55Z","lastTransitionTime":"2026-01-21T21:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.524802 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.524847 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.524859 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.524878 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.524890 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:55Z","lastTransitionTime":"2026-01-21T21:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.577748 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.577799 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.578008 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:55 crc kubenswrapper[4860]: E0121 21:09:55.578188 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.627281 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.627322 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.627333 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.627349 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.627358 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:55Z","lastTransitionTime":"2026-01-21T21:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.730879 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.731212 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.731409 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.731554 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.731689 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:55Z","lastTransitionTime":"2026-01-21T21:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.834958 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.835026 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.835036 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.835053 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.835065 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:55Z","lastTransitionTime":"2026-01-21T21:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.939055 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.939098 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.939107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.939122 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:55 crc kubenswrapper[4860]: I0121 21:09:55.939137 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:55Z","lastTransitionTime":"2026-01-21T21:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.042352 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.042454 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.042474 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.042507 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.042527 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:56Z","lastTransitionTime":"2026-01-21T21:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.145754 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.145803 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.145815 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.145833 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.145844 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:56Z","lastTransitionTime":"2026-01-21T21:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.232606 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:20:50.467251546 +0000 UTC Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.249251 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.249299 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.249313 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.249345 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.249361 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:56Z","lastTransitionTime":"2026-01-21T21:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.352102 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.352187 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.352200 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.352224 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.352238 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:56Z","lastTransitionTime":"2026-01-21T21:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.456719 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.457412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.457690 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.457921 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.458196 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:56Z","lastTransitionTime":"2026-01-21T21:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.561553 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.561669 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.561683 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.561706 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.561721 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:56Z","lastTransitionTime":"2026-01-21T21:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.577981 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.578047 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:56 crc kubenswrapper[4860]: E0121 21:09:56.578286 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:56 crc kubenswrapper[4860]: E0121 21:09:56.578508 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.664442 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.664498 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.664511 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.664535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.664548 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:56Z","lastTransitionTime":"2026-01-21T21:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.768006 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.768080 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.768095 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.768126 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.768140 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:56Z","lastTransitionTime":"2026-01-21T21:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.870663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.870716 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.870735 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.870759 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.870777 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:56Z","lastTransitionTime":"2026-01-21T21:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.982495 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.982759 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.982775 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.982799 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:56 crc kubenswrapper[4860]: I0121 21:09:56.982813 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:56Z","lastTransitionTime":"2026-01-21T21:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.086759 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.086835 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.086846 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.086877 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.086889 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:57Z","lastTransitionTime":"2026-01-21T21:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.191461 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.191512 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.191524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.191542 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.191556 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:57Z","lastTransitionTime":"2026-01-21T21:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.233340 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:56:06.984748252 +0000 UTC Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.295881 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.295964 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.295979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.295999 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.296014 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:57Z","lastTransitionTime":"2026-01-21T21:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.399326 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.399412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.399431 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.399462 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.399483 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:57Z","lastTransitionTime":"2026-01-21T21:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.502948 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.503013 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.503030 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.503055 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.503069 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:57Z","lastTransitionTime":"2026-01-21T21:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.578578 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.578578 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:57 crc kubenswrapper[4860]: E0121 21:09:57.578893 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:57 crc kubenswrapper[4860]: E0121 21:09:57.579257 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.607272 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.607336 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.607358 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.607387 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.607410 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:57Z","lastTransitionTime":"2026-01-21T21:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.711494 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.711546 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.711563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.711586 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.711601 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:57Z","lastTransitionTime":"2026-01-21T21:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.815901 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.816409 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.816422 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.816447 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.816464 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:57Z","lastTransitionTime":"2026-01-21T21:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.921213 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.921306 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.921331 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.921367 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:57 crc kubenswrapper[4860]: I0121 21:09:57.921391 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:57Z","lastTransitionTime":"2026-01-21T21:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.025300 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.025367 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.025388 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.025416 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.025431 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:58Z","lastTransitionTime":"2026-01-21T21:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.129915 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.130024 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.130040 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.130072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.130089 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:58Z","lastTransitionTime":"2026-01-21T21:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.233111 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.233166 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.233175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.233193 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.233208 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:58Z","lastTransitionTime":"2026-01-21T21:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.233563 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:52:03.184651586 +0000 UTC Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.337156 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.337219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.337232 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.337261 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.337276 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:58Z","lastTransitionTime":"2026-01-21T21:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.440809 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.440903 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.440927 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.440998 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.441020 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:58Z","lastTransitionTime":"2026-01-21T21:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.545303 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.545397 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.545417 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.545450 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.545472 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:58Z","lastTransitionTime":"2026-01-21T21:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.578354 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.578354 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:09:58 crc kubenswrapper[4860]: E0121 21:09:58.578651 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:09:58 crc kubenswrapper[4860]: E0121 21:09:58.578905 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.600369 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5d7193-f8b2-4564-a461-75ad8c9febcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2b68332811aeb46cfec71d7c7809aa12d356779e431bb5e68f4306b2147cec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is 
after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.622417 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.651486 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.651531 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.651547 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.651570 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.651586 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:58Z","lastTransitionTime":"2026-01-21T21:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.668215 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:31Z\\\",\\\"message\\\":\\\"\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/check-endpoints_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0121 21:09:31.464286 6573 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.692609 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.710676 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.738875 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.754550 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.754609 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.754622 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.754641 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.754655 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:58Z","lastTransitionTime":"2026-01-21T21:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.759552 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:44Z\\\",\\\"message\\\":\\\"2026-01-21T21:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b\\\\n2026-01-21T21:08:59+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b to /host/opt/cni/bin/\\\\n2026-01-21T21:08:59Z [verbose] multus-daemon started\\\\n2026-01-21T21:08:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T21:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.776985 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.801824 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa
4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.825613 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.845017 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.857644 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.857713 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.857728 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.857758 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.857773 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:58Z","lastTransitionTime":"2026-01-21T21:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.861021 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.880304 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a793
79b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.900754 4860 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.923641 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.943776 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.961100 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.961152 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.961166 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.961185 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.961197 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:58Z","lastTransitionTime":"2026-01-21T21:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.966823 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:58 crc kubenswrapper[4860]: I0121 21:09:58.986825 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.064196 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.064241 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.064251 4860 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.064269 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.064280 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:59Z","lastTransitionTime":"2026-01-21T21:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.166737 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.166793 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.166805 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.166826 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.166838 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:59Z","lastTransitionTime":"2026-01-21T21:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.234558 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:51:40.962020608 +0000 UTC Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.269170 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.269254 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.269270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.269293 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.269307 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:59Z","lastTransitionTime":"2026-01-21T21:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.371899 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.371953 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.371966 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.371984 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.371998 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:59Z","lastTransitionTime":"2026-01-21T21:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.474039 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.474085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.474098 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.474117 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.474130 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:59Z","lastTransitionTime":"2026-01-21T21:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.577708 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.577761 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.577779 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.577801 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.577818 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:59Z","lastTransitionTime":"2026-01-21T21:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.577833 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.578555 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:09:59 crc kubenswrapper[4860]: E0121 21:09:59.578687 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.578807 4860 scope.go:117] "RemoveContainer" containerID="4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849" Jan 21 21:09:59 crc kubenswrapper[4860]: E0121 21:09:59.578865 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.681416 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.681467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.681477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.681498 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.681512 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:59Z","lastTransitionTime":"2026-01-21T21:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.784352 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.784441 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.784461 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.784493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.784515 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:59Z","lastTransitionTime":"2026-01-21T21:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.887965 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.888006 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.888015 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.888034 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.888043 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:59Z","lastTransitionTime":"2026-01-21T21:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.993704 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.993815 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.993838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.993870 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:09:59 crc kubenswrapper[4860]: I0121 21:09:59.993972 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:09:59Z","lastTransitionTime":"2026-01-21T21:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.097790 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.097842 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.097855 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.097875 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.097886 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:00Z","lastTransitionTime":"2026-01-21T21:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.202093 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.202141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.202151 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.202170 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.202182 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:00Z","lastTransitionTime":"2026-01-21T21:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.234866 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:56:01.729952146 +0000 UTC Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.305836 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.305895 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.305913 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.305980 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.306003 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:00Z","lastTransitionTime":"2026-01-21T21:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.409520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.409594 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.409612 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.409644 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.409662 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:00Z","lastTransitionTime":"2026-01-21T21:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.512510 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.512575 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.512589 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.512615 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.512631 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:00Z","lastTransitionTime":"2026-01-21T21:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.578536 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.578857 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:00 crc kubenswrapper[4860]: E0121 21:10:00.579116 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:00 crc kubenswrapper[4860]: E0121 21:10:00.579251 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.603959 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/2.log" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.604249 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.607130 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7"} Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.608534 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.615557 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.615591 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.615601 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:00 crc 
kubenswrapper[4860]: I0121 21:10:00.615616 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.615627 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:00Z","lastTransitionTime":"2026-01-21T21:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.628963 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21
:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 
21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.648632 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
1T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.665924 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.693599 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.714381 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.718848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.718884 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.718896 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.718913 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.718924 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:00Z","lastTransitionTime":"2026-01-21T21:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.729913 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c635d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.748909 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.769061 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.800654 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.816010 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc 
kubenswrapper[4860]: I0121 21:10:00.821216 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.821260 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.821272 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.821293 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.821316 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:00Z","lastTransitionTime":"2026-01-21T21:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.834548 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5d7193-f8b2-4564-a461-75ad8c9febcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2b68332811aeb46cfec71d7c7809aa12d356779e431bb5e68f4306b2147cec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.850064 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.876494 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:31Z\\\",\\\"message\\\":\\\"\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/check-endpoints_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0121 21:09:31.464286 6573 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 
0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.896767 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:44Z\\\",\\\"message\\\":\\\"2026-01-21T21:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b\\\\n2026-01-21T21:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b to /host/opt/cni/bin/\\\\n2026-01-21T21:08:59Z [verbose] multus-daemon started\\\\n2026-01-21T21:08:59Z [verbose] 
Readiness Indicator file check\\\\n2026-01-21T21:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.913326 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.924665 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.924730 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.924743 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:00 crc 
kubenswrapper[4860]: I0121 21:10:00.924768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.924788 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:00Z","lastTransitionTime":"2026-01-21T21:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.936187 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04
932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.974107 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627aba46-44a7-4724-87bd-7caa8a0a3bf6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f47067b55815a00aa28905b98d7a65531fcc94bd78506cfb8c4a122b1bd899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a50b05dbf2209e0f071b99161d6a8309d5e7e78c6238f58dea5972ced5d205d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b972c6fcdcb7e2386982d0a02992820af357c7068ee93d1b0ffd917c50d68cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f060e1aa14d25d13a870316cece62ff1fe474e5752195ff9e093c8f760531e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65489e86fb91369aadad4567cfa45918c2c8f6ff2cd7ae22e2e857e3c2721f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c8d829b70abf71a738026b7913bce65df7dcf39789358904055b21e86fa204f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8d829b70abf71a738026b7913bce65df7dcf39789358904055b21e86fa204f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:00 crc kubenswrapper[4860]: I0121 21:10:00.995808 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:00Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.017891 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.027996 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.028043 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.028058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.028081 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.028096 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:01Z","lastTransitionTime":"2026-01-21T21:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.131355 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.131415 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.131431 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.131451 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.131462 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:01Z","lastTransitionTime":"2026-01-21T21:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.234616 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.234651 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.234660 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.234678 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.234690 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:01Z","lastTransitionTime":"2026-01-21T21:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.235147 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:55:50.050114505 +0000 UTC Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.338351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.338419 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.338431 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.338459 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.338473 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:01Z","lastTransitionTime":"2026-01-21T21:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.442115 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.442164 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.442177 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.442195 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.442207 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:01Z","lastTransitionTime":"2026-01-21T21:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.544888 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.544954 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.544970 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.544988 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.545000 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:01Z","lastTransitionTime":"2026-01-21T21:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.578744 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:01 crc kubenswrapper[4860]: E0121 21:10:01.578927 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.578744 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:01 crc kubenswrapper[4860]: E0121 21:10:01.579227 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.616451 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/3.log" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.618040 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/2.log" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.625271 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7" exitCode=1 Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.625351 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7"} Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.625472 4860 scope.go:117] "RemoveContainer" 
containerID="4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.626975 4860 scope.go:117] "RemoveContainer" containerID="8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7" Jan 21 21:10:01 crc kubenswrapper[4860]: E0121 21:10:01.627334 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.645152 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5d7193-f8b2-4564-a461-75ad8c9febcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2b68332811aeb46cfec71d7c7809aa12d356779e431bb5e68f4306b2147cec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42
745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.648449 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.649142 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.649160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.649184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.649202 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:01Z","lastTransitionTime":"2026-01-21T21:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.661639 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.686093 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf933116460fa240279ffa89dc98c27e79cc94dd2e7199388918a2a7d51d849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:31Z\\\",\\\"message\\\":\\\"\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/check-endpoints_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0121 21:09:31.464286 6573 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:10:01Z\\\",\\\"message\\\":\\\"ubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.211737 6933 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 21:10:01.211964 6933 reflector.go:311] Stopping 
reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 21:10:01.212169 6933 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.212194 6933 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.211443 6933 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 21:10:01.212786 6933 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 21:10:01.212808 6933 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 21:10:01.213361 6933 ovnkube.go:599] Stopped ovnkube\\\\nI0121 21:10:01.213487 6933 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 21:10:01.213624 6933 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.701605 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.720086 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.747642 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627aba46-44a7-4724-87bd-7caa8a0a3bf6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f47067b55815a00aa28905b98d7a65531fcc94bd78506cfb8c4a122b1bd899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a50b05dbf2209e0f071b99161d6a8309d5e7e78c6238f58dea5972ced5d205d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b972c6fcdcb7e2386982d0a02992820af357c7068ee93d1b0ffd917c50d68cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f4
2928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f060e1aa14d25d13a870316cece62ff1fe474e5752195ff9e093c8f760531e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65489e86fb91369aadad4567cfa45918c2c8f6ff2cd7ae22e2e857e3c2721f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c8d829b70abf71a7
38026b7913bce65df7dcf39789358904055b21e86fa204f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8d829b70abf71a738026b7913bce65df7dcf39789358904055b21e86fa204f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.752382 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.752460 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.752480 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.752511 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.752530 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:01Z","lastTransitionTime":"2026-01-21T21:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.762797 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.779557 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.795034 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:44Z\\\",\\\"message\\\":\\\"2026-01-21T21:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b\\\\n2026-01-21T21:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b to /host/opt/cni/bin/\\\\n2026-01-21T21:08:59Z [verbose] multus-daemon started\\\\n2026-01-21T21:08:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T21:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.809391 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.827288 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.841642 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.855975 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.856071 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.856099 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.856138 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.856182 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:01Z","lastTransitionTime":"2026-01-21T21:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.857316 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.878766 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c635d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.899544 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster
-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.918717 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.936080 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.951769 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.958815 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.958867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.958885 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.958914 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.959026 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:01Z","lastTransitionTime":"2026-01-21T21:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:01 crc kubenswrapper[4860]: I0121 21:10:01.969494 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:01Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.061906 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.062001 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.062017 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 
21:10:02.062037 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.062049 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:02Z","lastTransitionTime":"2026-01-21T21:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.165161 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.165202 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.165213 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.165233 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.165245 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:02Z","lastTransitionTime":"2026-01-21T21:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.235985 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:50:07.75445163 +0000 UTC Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.268981 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.269054 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.269079 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.269112 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.269133 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:02Z","lastTransitionTime":"2026-01-21T21:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.372227 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.372299 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.372320 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.372352 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.372371 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:02Z","lastTransitionTime":"2026-01-21T21:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.475867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.475964 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.475986 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.476017 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.476037 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:02Z","lastTransitionTime":"2026-01-21T21:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.578202 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.578270 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:02 crc kubenswrapper[4860]: E0121 21:10:02.578472 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:02 crc kubenswrapper[4860]: E0121 21:10:02.578667 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.580170 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.580235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.580256 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.580288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.580308 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:02Z","lastTransitionTime":"2026-01-21T21:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.632477 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/3.log" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.637346 4860 scope.go:117] "RemoveContainer" containerID="8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7" Jan 21 21:10:02 crc kubenswrapper[4860]: E0121 21:10:02.637524 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.655648 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c0
58498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.673622 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.684629 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.684734 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.684761 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.684804 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.684834 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:02Z","lastTransitionTime":"2026-01-21T21:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.690304 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.712273 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a793
79b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.730258 4860 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.749031 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.770870 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.788791 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.788832 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.788845 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.788865 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.788877 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:02Z","lastTransitionTime":"2026-01-21T21:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.791495 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.809600 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.828880 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5d7193-f8b2-4564-a461-75ad8c9febcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2b68332811aeb46cfec71d7c7809aa12d356779e431bb5e68f4306b2147cec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.843228 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.866642 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:10:01Z\\\",\\\"message\\\":\\\"ubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.211737 6933 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 21:10:01.211964 6933 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 21:10:01.212169 6933 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.212194 6933 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.211443 6933 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 21:10:01.212786 6933 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 21:10:01.212808 6933 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 21:10:01.213361 6933 ovnkube.go:599] Stopped ovnkube\\\\nI0121 21:10:01.213487 6933 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 21:10:01.213624 6933 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:10:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.881308 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.892258 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.892315 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.892330 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.892354 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.892368 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:02Z","lastTransitionTime":"2026-01-21T21:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.906049 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627aba46-44a7-4724-87bd-7caa8a0a3bf6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f47067b55815a00aa28905b98d7a65531fcc94bd78506cfb8c4a122b1bd899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a50b05dbf2209e0f071b99161d6a8309d5e7e78c6238f58dea5972ced5d205d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b972c6fcdcb7e2386982d0a02992820af357c7068ee93d1b0ffd917c50d68cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f060e1aa14d25d13a870316cece62ff1fe474e5752195ff9e093c8f760531e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65489e86fb91369aadad4567cfa45918c2c8f6ff2cd7ae22e2e857e3c2721f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c8d829b70abf71a738026b7913bce65df7dcf39789358904055b21e86fa204f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8d829b70abf71a738026b7913bce65df7dcf39789358904055b21e86fa204f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.925502 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.962230 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.977173 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:44Z\\\",\\\"message\\\":\\\"2026-01-21T21:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b\\\\n2026-01-21T21:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b to /host/opt/cni/bin/\\\\n2026-01-21T21:08:59Z [verbose] multus-daemon started\\\\n2026-01-21T21:08:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T21:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.994349 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a2
27963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.995643 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.995693 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.995703 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:02 crc 
kubenswrapper[4860]: I0121 21:10:02.995721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:02 crc kubenswrapper[4860]: I0121 21:10:02.995733 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:02Z","lastTransitionTime":"2026-01-21T21:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.013229 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04
932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:03Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.098322 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.098370 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.098382 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.098401 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.098418 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:03Z","lastTransitionTime":"2026-01-21T21:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.201446 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.201517 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.201531 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.201549 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.201561 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:03Z","lastTransitionTime":"2026-01-21T21:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.236241 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:35:27.70521542 +0000 UTC
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.304591 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.305048 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.305157 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.305331 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.305427 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:03Z","lastTransitionTime":"2026-01-21T21:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.408582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.409154 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.409343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.409539 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.409796 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:03Z","lastTransitionTime":"2026-01-21T21:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.513244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.513304 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.513323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.513349 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.513368 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:03Z","lastTransitionTime":"2026-01-21T21:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.577782 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.577782 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 21:10:03 crc kubenswrapper[4860]: E0121 21:10:03.578256 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 21:10:03 crc kubenswrapper[4860]: E0121 21:10:03.578333 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.616256 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.616314 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.616333 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.616356 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.616374 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:03Z","lastTransitionTime":"2026-01-21T21:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.718493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.718541 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.718561 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.718582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.718598 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:03Z","lastTransitionTime":"2026-01-21T21:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.821407 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.821482 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.821501 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.821531 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.821550 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:03Z","lastTransitionTime":"2026-01-21T21:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.923717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.923765 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.923778 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.923798 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:03 crc kubenswrapper[4860]: I0121 21:10:03.923810 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:03Z","lastTransitionTime":"2026-01-21T21:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.025894 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.025956 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.025965 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.025979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.025989 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.129413 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.129474 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.129490 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.129541 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.129557 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.232280 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.232323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.232340 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.232366 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.232383 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.236771 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:41:43.283971485 +0000 UTC
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.334756 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.334807 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.334818 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.334835 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.334846 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.438410 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.438488 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.438501 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.438523 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.438539 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.482083 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.482128 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.482141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.482161 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.482177 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:04 crc kubenswrapper[4860]: E0121 21:10:04.495300 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.499293 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.499325 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.499335 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.499351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.499361 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:04 crc kubenswrapper[4860]: E0121 21:10:04.511240 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.514887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.514952 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.514963 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.514984 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.514996 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:04 crc kubenswrapper[4860]: E0121 21:10:04.528493 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.532585 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.532623 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.532631 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.532649 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.532663 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:04 crc kubenswrapper[4860]: E0121 21:10:04.547086 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.551042 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.551065 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.551074 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.551093 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.551103 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:04 crc kubenswrapper[4860]: E0121 21:10:04.563099 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:04 crc kubenswrapper[4860]: E0121 21:10:04.563229 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.565026 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.565058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.565070 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.565085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.565118 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.577891 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.577891 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:04 crc kubenswrapper[4860]: E0121 21:10:04.578076 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:04 crc kubenswrapper[4860]: E0121 21:10:04.578125 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.667348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.667383 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.667393 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.667408 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.667417 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.770062 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.770098 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.770107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.770121 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.770130 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.871776 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.871821 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.871829 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.871843 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.871852 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.974402 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.974440 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.974452 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.974467 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:04 crc kubenswrapper[4860]: I0121 21:10:04.974478 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:04Z","lastTransitionTime":"2026-01-21T21:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.076777 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.076819 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.076831 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.076844 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.076854 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:05Z","lastTransitionTime":"2026-01-21T21:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.180118 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.180463 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.180565 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.180640 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.180728 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:05Z","lastTransitionTime":"2026-01-21T21:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.236884 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:06:06.602812931 +0000 UTC Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.283394 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.283688 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.283768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.283839 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.283911 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:05Z","lastTransitionTime":"2026-01-21T21:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.386378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.386431 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.386444 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.386464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.386758 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:05Z","lastTransitionTime":"2026-01-21T21:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.489683 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.489716 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.489727 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.489741 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.489751 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:05Z","lastTransitionTime":"2026-01-21T21:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.578595 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:05 crc kubenswrapper[4860]: E0121 21:10:05.578732 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.578595 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:05 crc kubenswrapper[4860]: E0121 21:10:05.579075 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.593695 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.593752 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.593769 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.593793 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.593810 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:05Z","lastTransitionTime":"2026-01-21T21:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.696776 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.696848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.696866 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.696893 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.696912 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:05Z","lastTransitionTime":"2026-01-21T21:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.800118 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.800176 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.800193 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.800217 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.800235 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:05Z","lastTransitionTime":"2026-01-21T21:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.903864 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.903921 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.903955 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.903981 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:05 crc kubenswrapper[4860]: I0121 21:10:05.904000 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:05Z","lastTransitionTime":"2026-01-21T21:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.006116 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.006173 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.006184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.006201 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.006213 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:06Z","lastTransitionTime":"2026-01-21T21:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.108643 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.109053 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.109188 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.109328 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.109458 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:06Z","lastTransitionTime":"2026-01-21T21:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.212235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.212287 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.212302 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.212323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.212338 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:06Z","lastTransitionTime":"2026-01-21T21:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.238585 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:51:42.358103664 +0000 UTC Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.315074 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.315138 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.315157 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.315185 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.315204 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:06Z","lastTransitionTime":"2026-01-21T21:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.419356 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.419710 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.420116 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.420361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.420602 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:06Z","lastTransitionTime":"2026-01-21T21:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.524322 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.524406 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.524426 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.524455 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.524475 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:06Z","lastTransitionTime":"2026-01-21T21:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.577858 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:06 crc kubenswrapper[4860]: E0121 21:10:06.578384 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.578174 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:06 crc kubenswrapper[4860]: E0121 21:10:06.578645 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.627343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.627387 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.627400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.627421 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.627437 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:06Z","lastTransitionTime":"2026-01-21T21:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.730663 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.730734 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.730747 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.730766 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.730779 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:06Z","lastTransitionTime":"2026-01-21T21:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.833249 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.833294 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.833307 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.833324 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.833338 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:06Z","lastTransitionTime":"2026-01-21T21:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.936222 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.936346 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.936379 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.936427 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:06 crc kubenswrapper[4860]: I0121 21:10:06.936459 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:06Z","lastTransitionTime":"2026-01-21T21:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.039108 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.039162 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.039174 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.039206 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.039223 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:07Z","lastTransitionTime":"2026-01-21T21:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.297381 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:47:21.259057604 +0000 UTC Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.298148 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.298182 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.298195 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.298212 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.298222 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:07Z","lastTransitionTime":"2026-01-21T21:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.404078 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.404294 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.404306 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.404583 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.404640 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:07Z","lastTransitionTime":"2026-01-21T21:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.507459 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.507532 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.507546 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.507563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.507581 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:07Z","lastTransitionTime":"2026-01-21T21:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.578761 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.578785 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:07 crc kubenswrapper[4860]: E0121 21:10:07.578975 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:07 crc kubenswrapper[4860]: E0121 21:10:07.579057 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.610392 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.610437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.610445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.610460 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.610470 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:07Z","lastTransitionTime":"2026-01-21T21:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.713064 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.713117 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.713127 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.713143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.713153 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:07Z","lastTransitionTime":"2026-01-21T21:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.816156 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.816231 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.816246 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.816267 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.816280 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:07Z","lastTransitionTime":"2026-01-21T21:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.919021 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.919068 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.919082 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.919147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:07 crc kubenswrapper[4860]: I0121 21:10:07.919160 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:07Z","lastTransitionTime":"2026-01-21T21:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.022089 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.022160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.022172 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.022188 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.022201 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:08Z","lastTransitionTime":"2026-01-21T21:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.124527 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.124559 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.124589 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.124603 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.124611 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:08Z","lastTransitionTime":"2026-01-21T21:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.227022 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.227073 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.227087 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.227135 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.227151 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:08Z","lastTransitionTime":"2026-01-21T21:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.298532 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:54:52.54963196 +0000 UTC Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.330154 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.330203 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.330215 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.330232 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.330245 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:08Z","lastTransitionTime":"2026-01-21T21:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.433651 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.433690 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.433699 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.433714 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.433722 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:08Z","lastTransitionTime":"2026-01-21T21:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.536460 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.536510 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.536522 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.536543 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.536559 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:08Z","lastTransitionTime":"2026-01-21T21:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.577891 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.577891 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:08 crc kubenswrapper[4860]: E0121 21:10:08.578103 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:08 crc kubenswrapper[4860]: E0121 21:10:08.578131 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.601614 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627aba46-44a7-4724-87bd-7caa8a0a3bf6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f47067b55815a00aa28905b98d7a65531fcc94bd78506cfb8c4a122b1bd899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8
b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a50b05dbf2209e0f071b99161d6a8309d5e7e78c6238f58dea5972ced5d205d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b972c6fcdcb7e2386982d0a02992820af357c7068ee93d1b0ffd917c50d68cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f060e1aa14d25d13a870316cece62ff1fe474e5752195ff9e093c8f760531e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65489e86fb91369aadad4567cfa45918c2c8f6ff2cd7ae22e2e857e3c2721f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c8d829b70abf71a738026b7913bce65df7dcf39789358904055b21e86fa204f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8d829b70abf71a738026b7913bce65df7dcf39789358904055b21e86fa204f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.616274 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.631295 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.638801 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.638848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.638881 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.638902 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.638914 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:08Z","lastTransitionTime":"2026-01-21T21:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.754482 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:44Z\\\",\\\"message\\\":\\\"2026-01-21T21:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b\\\\n2026-01-21T21:08:59+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b to /host/opt/cni/bin/\\\\n2026-01-21T21:08:59Z [verbose] multus-daemon started\\\\n2026-01-21T21:08:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T21:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.756417 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.756449 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.756460 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.756476 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.756487 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:08Z","lastTransitionTime":"2026-01-21T21:10:08Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.767570 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.781761 4860 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.800200 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea04
85f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.812231 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.821368 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf1
06870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.833394 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.845229 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.861198 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.861229 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.861239 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 
21:10:08.861253 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.861263 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:08Z","lastTransitionTime":"2026-01-21T21:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.863326 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.882053 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.898184 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.912046 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.930373 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5d7193-f8b2-4564-a461-75ad8c9febcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2b68332811aeb46cfec71d7c7809aa12d356779e431bb5e68f4306b2147cec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.945039 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.963798 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.963867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.963880 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.963901 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.963914 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:08Z","lastTransitionTime":"2026-01-21T21:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.974781 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:10:01Z\\\",\\\"message\\\":\\\"ubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.211737 6933 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 21:10:01.211964 6933 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 21:10:01.212169 6933 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.212194 6933 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.211443 6933 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 21:10:01.212786 6933 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 21:10:01.212808 6933 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 21:10:01.213361 6933 ovnkube.go:599] Stopped ovnkube\\\\nI0121 21:10:01.213487 6933 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 21:10:01.213624 6933 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:10:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:08 crc kubenswrapper[4860]: I0121 21:10:08.987992 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:08Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.066401 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.066441 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.066453 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.066468 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.066478 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:09Z","lastTransitionTime":"2026-01-21T21:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.168521 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.168561 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.168573 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.168588 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.168629 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:09Z","lastTransitionTime":"2026-01-21T21:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.271255 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.271294 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.271306 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.271321 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.271332 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:09Z","lastTransitionTime":"2026-01-21T21:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.299666 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:06:47.4664275 +0000 UTC Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.373911 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.373986 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.374021 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.374039 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.374051 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:09Z","lastTransitionTime":"2026-01-21T21:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.477072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.477132 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.477151 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.477176 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.477193 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:09Z","lastTransitionTime":"2026-01-21T21:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.578166 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.578125 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:09 crc kubenswrapper[4860]: E0121 21:10:09.578341 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:09 crc kubenswrapper[4860]: E0121 21:10:09.578590 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.579370 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.579402 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.579412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.579426 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.579439 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:09Z","lastTransitionTime":"2026-01-21T21:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.681948 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.682053 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.682068 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.682088 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.682106 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:09Z","lastTransitionTime":"2026-01-21T21:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.784855 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.784907 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.784920 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.784957 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.784970 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:09Z","lastTransitionTime":"2026-01-21T21:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.888408 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.888453 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.888463 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.888478 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.888487 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:09Z","lastTransitionTime":"2026-01-21T21:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.991374 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.991417 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.991427 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.991442 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:09 crc kubenswrapper[4860]: I0121 21:10:09.991452 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:09Z","lastTransitionTime":"2026-01-21T21:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.094497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.094539 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.094551 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.094569 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.094582 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:10Z","lastTransitionTime":"2026-01-21T21:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.197012 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.197056 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.197068 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.197083 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.197092 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:10Z","lastTransitionTime":"2026-01-21T21:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.299850 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:58:37.186021724 +0000 UTC Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.300597 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.300641 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.300653 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.300670 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.300683 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:10Z","lastTransitionTime":"2026-01-21T21:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.403830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.403876 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.403888 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.403905 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.403918 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:10Z","lastTransitionTime":"2026-01-21T21:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.506114 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.506155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.506164 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.506177 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.506187 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:10Z","lastTransitionTime":"2026-01-21T21:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.578121 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.578121 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:10 crc kubenswrapper[4860]: E0121 21:10:10.578486 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:10 crc kubenswrapper[4860]: E0121 21:10:10.578780 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.608609 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.608671 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.608683 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.608700 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.608715 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:10Z","lastTransitionTime":"2026-01-21T21:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.711919 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.712002 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.712012 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.712029 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.712040 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:10Z","lastTransitionTime":"2026-01-21T21:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.814853 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.814905 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.814972 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.815001 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.815028 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:10Z","lastTransitionTime":"2026-01-21T21:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.918026 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.918086 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.918102 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.918124 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:10 crc kubenswrapper[4860]: I0121 21:10:10.918138 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:10Z","lastTransitionTime":"2026-01-21T21:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.020536 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.020600 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.020608 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.020629 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.020640 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:11Z","lastTransitionTime":"2026-01-21T21:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.125288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.125338 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.125354 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.125378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.125395 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:11Z","lastTransitionTime":"2026-01-21T21:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.228549 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.228633 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.228672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.228719 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.228766 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:11Z","lastTransitionTime":"2026-01-21T21:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.300926 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:50:43.804808018 +0000 UTC Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.332457 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.332537 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.332561 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.332592 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.332616 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:11Z","lastTransitionTime":"2026-01-21T21:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.436558 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.437046 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.437227 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.437387 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.437515 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:11Z","lastTransitionTime":"2026-01-21T21:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.541269 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.541337 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.541351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.541374 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.541394 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:11Z","lastTransitionTime":"2026-01-21T21:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.577794 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.577862 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:11 crc kubenswrapper[4860]: E0121 21:10:11.577924 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:11 crc kubenswrapper[4860]: E0121 21:10:11.578014 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.644375 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.644823 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.645023 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.645230 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.645382 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:11Z","lastTransitionTime":"2026-01-21T21:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.748400 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.748776 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.748886 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.749043 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.749174 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:11Z","lastTransitionTime":"2026-01-21T21:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.851747 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.852175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.852341 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.852449 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.852576 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:11Z","lastTransitionTime":"2026-01-21T21:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.955161 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.955475 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.955564 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.955672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:11 crc kubenswrapper[4860]: I0121 21:10:11.955799 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:11Z","lastTransitionTime":"2026-01-21T21:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.058899 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.058961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.058971 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.058988 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.058999 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:12Z","lastTransitionTime":"2026-01-21T21:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.160986 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.161024 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.161034 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.161050 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.161059 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:12Z","lastTransitionTime":"2026-01-21T21:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.263913 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.263981 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.264001 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.264024 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.264042 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:12Z","lastTransitionTime":"2026-01-21T21:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.301573 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:08:41.24069428 +0000 UTC Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.366540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.366574 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.366583 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.366598 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.366608 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:12Z","lastTransitionTime":"2026-01-21T21:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.468962 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.469015 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.469033 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.469050 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.469062 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:12Z","lastTransitionTime":"2026-01-21T21:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.571498 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.571538 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.571546 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.571561 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.571572 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:12Z","lastTransitionTime":"2026-01-21T21:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.578876 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.578876 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:12 crc kubenswrapper[4860]: E0121 21:10:12.579235 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:12 crc kubenswrapper[4860]: E0121 21:10:12.579095 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.674092 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.674143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.674155 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.674177 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.674189 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:12Z","lastTransitionTime":"2026-01-21T21:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.776152 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.776200 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.776213 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.776232 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.776244 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:12Z","lastTransitionTime":"2026-01-21T21:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.879607 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.879657 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.879671 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.879691 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.879704 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:12Z","lastTransitionTime":"2026-01-21T21:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.982172 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.982230 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.982241 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.982256 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:12 crc kubenswrapper[4860]: I0121 21:10:12.982267 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:12Z","lastTransitionTime":"2026-01-21T21:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.085234 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.085277 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.085287 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.085305 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.085314 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:13Z","lastTransitionTime":"2026-01-21T21:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.187657 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.187694 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.187703 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.187717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.187727 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:13Z","lastTransitionTime":"2026-01-21T21:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.290771 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.290843 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.290860 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.290881 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.290894 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:13Z","lastTransitionTime":"2026-01-21T21:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.302111 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:59:30.33762552 +0000 UTC Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.393792 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.393838 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.393854 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.393872 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.393882 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:13Z","lastTransitionTime":"2026-01-21T21:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.496719 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.496759 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.496768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.496788 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.496798 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:13Z","lastTransitionTime":"2026-01-21T21:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.578824 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.578908 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:13 crc kubenswrapper[4860]: E0121 21:10:13.579046 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:13 crc kubenswrapper[4860]: E0121 21:10:13.579589 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.579861 4860 scope.go:117] "RemoveContainer" containerID="8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7" Jan 21 21:10:13 crc kubenswrapper[4860]: E0121 21:10:13.580033 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.598888 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.598985 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.599001 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.599024 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.599042 4860 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:13Z","lastTransitionTime":"2026-01-21T21:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.701377 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.701418 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.701429 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.701443 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.701453 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:13Z","lastTransitionTime":"2026-01-21T21:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.804690 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.804763 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.804775 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.804794 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.804807 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:13Z","lastTransitionTime":"2026-01-21T21:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.907307 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.907343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.907352 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.907367 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:13 crc kubenswrapper[4860]: I0121 21:10:13.907376 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:13Z","lastTransitionTime":"2026-01-21T21:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.010002 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.010058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.010077 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.010100 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.010116 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.112676 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.112720 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.112729 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.112747 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.112757 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.209085 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:14 crc kubenswrapper[4860]: E0121 21:10:14.209291 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:10:14 crc kubenswrapper[4860]: E0121 21:10:14.209406 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs podName:60ae05da-3403-4a2f-92f4-2ffa574a65a8 nodeName:}" failed. No retries permitted until 2026-01-21 21:11:18.20937776 +0000 UTC m=+170.431556250 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs") pod "network-metrics-daemon-rrwcr" (UID: "60ae05da-3403-4a2f-92f4-2ffa574a65a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.215825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.215859 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.215870 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.215887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.215896 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.302490 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:56:01.12859527 +0000 UTC Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.318640 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.318857 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.318962 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.319146 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.319264 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.421778 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.421818 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.421831 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.421852 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.421868 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.524661 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.524726 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.524748 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.524777 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.524819 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.578150 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:14 crc kubenswrapper[4860]: E0121 21:10:14.578325 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.578597 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:14 crc kubenswrapper[4860]: E0121 21:10:14.579083 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.627886 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.627922 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.627950 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.627974 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.627986 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.730781 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.730813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.730822 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.730835 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.730845 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.833752 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.833806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.833822 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.833848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.833864 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.871388 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.871439 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.871448 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.871469 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.871486 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: E0121 21:10:14.885443 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.897585 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.897672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.897693 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.897720 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.897745 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: E0121 21:10:14.934540 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.941743 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.941804 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.941825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.941849 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.941867 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: E0121 21:10:14.962548 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.968784 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.968817 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.968828 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.968842 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:14 crc kubenswrapper[4860]: I0121 21:10:14.968852 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:14Z","lastTransitionTime":"2026-01-21T21:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:14 crc kubenswrapper[4860]: E0121 21:10:14.996447 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.001376 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.001405 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.001415 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.001430 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.001439 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:15Z","lastTransitionTime":"2026-01-21T21:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:15 crc kubenswrapper[4860]: E0121 21:10:15.016526 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:15 crc kubenswrapper[4860]: E0121 21:10:15.016643 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.018468 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.018499 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.018508 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.018523 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.018532 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:15Z","lastTransitionTime":"2026-01-21T21:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.121154 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.121191 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.121200 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.121218 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.121228 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:15Z","lastTransitionTime":"2026-01-21T21:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.224070 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.224136 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.224150 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.224168 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.224181 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:15Z","lastTransitionTime":"2026-01-21T21:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.303647 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:36:09.42799664 +0000 UTC Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.327478 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.327528 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.327539 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.327555 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.327564 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:15Z","lastTransitionTime":"2026-01-21T21:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.431340 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.431404 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.431419 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.431437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.431447 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:15Z","lastTransitionTime":"2026-01-21T21:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.535293 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.535346 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.535361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.535380 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.535392 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:15Z","lastTransitionTime":"2026-01-21T21:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.578873 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.579073 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:15 crc kubenswrapper[4860]: E0121 21:10:15.579283 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:15 crc kubenswrapper[4860]: E0121 21:10:15.579102 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.639221 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.639641 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.639653 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.639673 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.639686 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:15Z","lastTransitionTime":"2026-01-21T21:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.742717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.742746 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.742754 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.742768 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.742776 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:15Z","lastTransitionTime":"2026-01-21T21:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.845351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.845388 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.845397 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.845412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.845422 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:15Z","lastTransitionTime":"2026-01-21T21:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.948600 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.948673 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.948683 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.948697 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:15 crc kubenswrapper[4860]: I0121 21:10:15.948707 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:15Z","lastTransitionTime":"2026-01-21T21:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.052342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.052404 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.052422 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.052442 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.052454 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:16Z","lastTransitionTime":"2026-01-21T21:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.155524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.155569 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.155581 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.155598 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.155610 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:16Z","lastTransitionTime":"2026-01-21T21:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.259081 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.259134 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.259144 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.259164 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.259176 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:16Z","lastTransitionTime":"2026-01-21T21:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.304299 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 21:34:50.28685952 +0000 UTC Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.361995 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.362044 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.362058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.362141 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.362156 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:16Z","lastTransitionTime":"2026-01-21T21:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.465147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.465564 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.465696 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.465824 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.465979 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:16Z","lastTransitionTime":"2026-01-21T21:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.568907 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.569020 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.569040 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.569071 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.569091 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:16Z","lastTransitionTime":"2026-01-21T21:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.578208 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:16 crc kubenswrapper[4860]: E0121 21:10:16.578448 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.578493 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:16 crc kubenswrapper[4860]: E0121 21:10:16.578625 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.673171 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.673230 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.673247 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.673270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.673316 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:16Z","lastTransitionTime":"2026-01-21T21:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.776698 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.776744 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.776754 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.776773 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.776784 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:16Z","lastTransitionTime":"2026-01-21T21:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.880601 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.880661 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.880673 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.880697 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.880717 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:16Z","lastTransitionTime":"2026-01-21T21:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.984363 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.984412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.984422 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.984439 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:16 crc kubenswrapper[4860]: I0121 21:10:16.984449 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:16Z","lastTransitionTime":"2026-01-21T21:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.087906 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.088075 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.088100 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.088133 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.088154 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:17Z","lastTransitionTime":"2026-01-21T21:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.191527 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.191569 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.191581 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.191598 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.191611 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:17Z","lastTransitionTime":"2026-01-21T21:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.294625 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.294677 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.294695 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.294720 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.294735 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:17Z","lastTransitionTime":"2026-01-21T21:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.304874 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:36:39.175395102 +0000 UTC Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.397486 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.397539 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.397554 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.397575 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.397590 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:17Z","lastTransitionTime":"2026-01-21T21:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.501175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.501237 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.501259 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.501289 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.501308 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:17Z","lastTransitionTime":"2026-01-21T21:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.578155 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.578183 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:17 crc kubenswrapper[4860]: E0121 21:10:17.578334 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:17 crc kubenswrapper[4860]: E0121 21:10:17.578458 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.604582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.604626 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.604638 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.604657 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.604673 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:17Z","lastTransitionTime":"2026-01-21T21:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.708488 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.708540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.708553 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.708573 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.708586 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:17Z","lastTransitionTime":"2026-01-21T21:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.811996 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.812041 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.812053 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.812068 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.812078 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:17Z","lastTransitionTime":"2026-01-21T21:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.914826 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.914906 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.914921 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.914962 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:17 crc kubenswrapper[4860]: I0121 21:10:17.914981 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:17Z","lastTransitionTime":"2026-01-21T21:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.018510 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.018554 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.018565 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.018578 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.018590 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:18Z","lastTransitionTime":"2026-01-21T21:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.123820 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.123918 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.123963 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.123986 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.124003 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:18Z","lastTransitionTime":"2026-01-21T21:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.227409 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.227534 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.227548 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.227567 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.227581 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:18Z","lastTransitionTime":"2026-01-21T21:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.305214 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:23:39.610380082 +0000 UTC Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.331311 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.331370 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.331381 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.331427 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.331442 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:18Z","lastTransitionTime":"2026-01-21T21:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.435330 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.435377 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.435387 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.435405 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.435417 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:18Z","lastTransitionTime":"2026-01-21T21:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.538112 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.538181 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.538200 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.538219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.538231 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:18Z","lastTransitionTime":"2026-01-21T21:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.578835 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:18 crc kubenswrapper[4860]: E0121 21:10:18.579084 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.579416 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:18 crc kubenswrapper[4860]: E0121 21:10:18.580079 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.602506 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s67xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca77d0da8cec0e17e9814276bcc29ad55e2e3c909
e3995bb0a3d6a971376f7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:09:44Z\\\",\\\"message\\\":\\\"2026-01-21T21:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b\\\\n2026-01-21T21:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2afee325-e84b-4d98-8d9e-a05b146cc02b to /host/opt/cni/bin/\\\\n2026-01-21T21:08:59Z [verbose] multus-daemon started\\\\n2026-01-21T21:08:59Z [verbose] Readiness Indicator file check\\\\n2026-01-21T21:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s67xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.622118 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebb59cca-ede6-44c6-850b-28d109e50dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4408cd518397b902b64d876134ad24ab1fa66870623c88a781ee491edafc10d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qb8lx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w47lx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.641248 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.641382 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.641419 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.641455 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.641479 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:18Z","lastTransitionTime":"2026-01-21T21:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.643682 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77hw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfdb3d59f14a37e9fbb7a566be030e83fc5a9f41cf56c1b7b612ee2621f78dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9c896bd360433d259228a03105af7edf3e2d007c53e1cbe43fdd03f82f25b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9007f985b3c0184fdd9304d8e56e1f2273ff2100a9e1c38db3445ec37d8e8382\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42716518a265e55e94ed050822347e75d9ff9b9bcb69cd9e12fe544ccf29c5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcaaa4b088ef81e367f9a8f5307d0a2de17846e49efe92d854b7b4769b0e1722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a186c039d0c14244084f4086bdaf5495bf7c2bde99f0e333fdbbd51cf9da4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04932db7e6b39b2003e8a604d985db694237d5d4437e07e009e7603606af4073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29tmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77hw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.672770 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627aba46-44a7-4724-87bd-7caa8a0a3bf6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f47067b55815a00aa28905b98d7a65531fcc94bd78506cfb8c4a122b1bd899d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a50b05dbf2209e0f071b99161d6a8309d5e7e78c6238f58dea5972ced5d205d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b972c6fcdcb7e2386982d0a02992820af357c7068ee93d1b0ffd917c50d68cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f060e1aa14d25d13a870316cece62ff1fe474e5752195ff9e093c8f760531e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65489e86fb91369aadad4567cfa45918c2c8f6ff2cd7ae22e2e857e3c2721f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc35b84d98b14ed9513576abca4eab3711f3958852819cad13ae840ea49b8039\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e112d178e377429b9a70854c75d0551a58cc207b621521c84b55b09115d85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c8d829b70abf71a738026b7913bce65df7dcf39789358904055b21e86fa204f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8d829b70abf71a738026b7913bce65df7dcf39789358904055b21e86fa204f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.688112 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.704445 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c98166fd6bfb202e1b7e3aade86c431f8cd266898eced5fce91a2703c4aa47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.721374 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e5e6715-eead-4da4-b376-f7d87b89e7b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 21:08:44.347026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 21:08:44.348818 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3521170295/tls.crt::/tmp/serving-cert-3521170295/tls.key\\\\\\\"\\\\nI0121 21:08:50.430804 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 21:08:50.531223 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 21:08:50.531270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 21:08:50.534384 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 21:08:50.534405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 21:08:50.568249 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 21:08:50.568310 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568317 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 21:08:50.568322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 21:08:50.568326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0121 21:08:50.568328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 21:08:50.568332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 21:08:50.568271 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 21:08:50.572231 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.734524 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7312975-0b19-4971-9497-9451b87225ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76da08206432ecce04f20f6f8d984d7725497bdf88826c38d469d02e4deb005d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6105c047642dac5c3eb68118f57ffd22bfe7ab32c87479a20a30e7d9f59bc0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d7221ae260536a1522e6a411773e00220ac2efb123f79293c3ae47324309006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f296735066cb17c4a07f82676986de926e57fda640ebcfe20cfc9e0128ac2d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.745139 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.745191 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.745204 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.745224 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.745239 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:18Z","lastTransitionTime":"2026-01-21T21:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.749607 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6n8b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d522d6-a954-4073-86aa-4c869d61585f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b88f3cf106870aaed812dc0661908f3b53bd45bf979c6d6e226070e9f8e82a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qw7m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6n8b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.775354 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.792208 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00000b45d1f107e14cc53a3059a9ca042eac70b2589764c0f6f5854353df4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47dffb41ec07173b0f3a3157bbbb324f3ad121d3a9ed9bd7eb94aaef49fb575e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.807609 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb31d86f-995f-4262-bd5f-0487bd341607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b97d127373980d155dcf2dcd958f463f1c8361e6ff36c3e4f259dff032a7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c98e12277db4cf54c69f202f29ad8b7817c6
35d828e6be36cf71792d6a3422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kslzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.828085 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077fc74a-aa34-4002-834b-d3bd4b9e79c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79b60f4f0a0cef177950815ed7daba9eb0e0b222465f4d4d89b3561ea4c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21826f87a617878d6d43bbb1e1093c86799715a5183a352fc9c885014f40b25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b05b6063a02a2d5dd6ffe84669c75140c3de3eedbe47c84d3c27a87abfdb135a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T21:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.846305 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.850978 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.851026 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.851040 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 
21:10:18.851095 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.851114 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:18Z","lastTransitionTime":"2026-01-21T21:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.863264 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c94d8e2ae7cbffb475869d0e3c284fc914894a8dc009cf313f3bb1fa2cc6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.876517 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ae05da-3403-4a2f-92f4-2ffa574a65a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rrwcr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.891729 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5d7193-f8b2-4564-a461-75ad8c9febcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2b68332811aeb46cfec71d7c7809aa12d356779e431bb5e68f4306b2147cec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5a51a220761dafd0a040046fabb9f85bc60020f49e32cb34cf30201fae7f636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.906359 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccxw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95f1feb1-156a-4494-a3c9-30581a4bf19a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a60c15471d97ce6d281da60b5a2c28403c2fca9781c3d763c6075bad767e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgr8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccxw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.932453 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T21:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T21:10:01Z\\\",\\\"message\\\":\\\"ubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.211737 6933 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 21:10:01.211964 6933 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 21:10:01.212169 6933 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.212194 6933 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 21:10:01.211443 6933 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 21:10:01.212786 6933 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 21:10:01.212808 6933 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 21:10:01.213361 6933 ovnkube.go:599] Stopped ovnkube\\\\nI0121 21:10:01.213487 6933 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 21:10:01.213624 6933 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T21:10:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T21:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3598377993697f6bf
e63af19c81a0893cdaad405e7dd392aed0f3964af55b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T21:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T21:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tb7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T21:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:18Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.953546 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.953968 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.954065 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.954182 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:18 crc kubenswrapper[4860]: I0121 21:10:18.954314 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:18Z","lastTransitionTime":"2026-01-21T21:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.057261 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.057702 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.058070 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.058262 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.058463 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:19Z","lastTransitionTime":"2026-01-21T21:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.161122 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.161388 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.161560 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.161717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.161821 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:19Z","lastTransitionTime":"2026-01-21T21:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.264841 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.264878 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.264887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.264902 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.264915 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:19Z","lastTransitionTime":"2026-01-21T21:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.305409 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:29:09.170989662 +0000 UTC Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.369215 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.369301 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.369322 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.369358 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.369380 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:19Z","lastTransitionTime":"2026-01-21T21:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.473471 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.473521 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.473531 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.473549 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.473560 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:19Z","lastTransitionTime":"2026-01-21T21:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.579444 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.579520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.579692 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.579710 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.579726 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.579732 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.579741 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:19Z","lastTransitionTime":"2026-01-21T21:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:19 crc kubenswrapper[4860]: E0121 21:10:19.579924 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:19 crc kubenswrapper[4860]: E0121 21:10:19.579913 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.682769 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.682812 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.682821 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.682837 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.682873 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:19Z","lastTransitionTime":"2026-01-21T21:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.785565 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.785637 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.785652 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.785671 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.785686 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:19Z","lastTransitionTime":"2026-01-21T21:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.889437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.889565 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.889595 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.889641 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.889669 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:19Z","lastTransitionTime":"2026-01-21T21:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.993213 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.993291 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.993309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.993360 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:19 crc kubenswrapper[4860]: I0121 21:10:19.993380 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:19Z","lastTransitionTime":"2026-01-21T21:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.096444 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.096502 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.096513 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.096548 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.096565 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:20Z","lastTransitionTime":"2026-01-21T21:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.200367 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.200425 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.200441 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.200479 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.200494 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:20Z","lastTransitionTime":"2026-01-21T21:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.303387 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.303472 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.303491 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.303516 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.303531 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:20Z","lastTransitionTime":"2026-01-21T21:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.306511 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:52:54.830546293 +0000 UTC Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.406746 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.406826 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.406840 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.406870 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.406886 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:20Z","lastTransitionTime":"2026-01-21T21:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.510631 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.510695 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.510717 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.510751 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.510767 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:20Z","lastTransitionTime":"2026-01-21T21:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.578386 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.578679 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:20 crc kubenswrapper[4860]: E0121 21:10:20.578682 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:20 crc kubenswrapper[4860]: E0121 21:10:20.579066 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.614266 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.614322 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.614333 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.614359 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.614375 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:20Z","lastTransitionTime":"2026-01-21T21:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.717230 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.717313 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.717328 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.717355 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.717389 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:20Z","lastTransitionTime":"2026-01-21T21:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.821049 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.821123 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.821152 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.821181 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.821200 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:20Z","lastTransitionTime":"2026-01-21T21:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.924239 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.924457 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.924476 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.924493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:20 crc kubenswrapper[4860]: I0121 21:10:20.924518 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:20Z","lastTransitionTime":"2026-01-21T21:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.027232 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.027291 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.027303 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.027324 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.027338 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:21Z","lastTransitionTime":"2026-01-21T21:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.131293 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.131786 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.131806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.131844 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.131860 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:21Z","lastTransitionTime":"2026-01-21T21:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.235990 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.236042 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.236058 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.236085 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.236105 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:21Z","lastTransitionTime":"2026-01-21T21:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.307223 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:17:11.889329462 +0000 UTC Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.339610 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.339664 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.339676 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.339698 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.339712 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:21Z","lastTransitionTime":"2026-01-21T21:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.443849 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.443919 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.443970 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.444020 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.444061 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:21Z","lastTransitionTime":"2026-01-21T21:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.548699 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.548776 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.548788 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.548816 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.548835 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:21Z","lastTransitionTime":"2026-01-21T21:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.578059 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.578057 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:21 crc kubenswrapper[4860]: E0121 21:10:21.578292 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:21 crc kubenswrapper[4860]: E0121 21:10:21.578416 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.652042 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.652092 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.652109 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.652132 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.652145 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:21Z","lastTransitionTime":"2026-01-21T21:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.755742 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.755797 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.755810 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.755828 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.755837 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:21Z","lastTransitionTime":"2026-01-21T21:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.859304 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.859366 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.859387 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.859430 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.859451 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:21Z","lastTransitionTime":"2026-01-21T21:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.963084 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.963131 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.963144 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.963168 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:21 crc kubenswrapper[4860]: I0121 21:10:21.963183 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:21Z","lastTransitionTime":"2026-01-21T21:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.066328 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.066380 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.066392 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.066409 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.066419 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:22Z","lastTransitionTime":"2026-01-21T21:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.170107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.170159 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.170169 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.170221 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.170233 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:22Z","lastTransitionTime":"2026-01-21T21:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.273881 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.273924 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.273951 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.273968 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.273977 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:22Z","lastTransitionTime":"2026-01-21T21:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.307776 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:11:17.658106476 +0000 UTC Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.378378 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.378464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.378481 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.378514 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.378533 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:22Z","lastTransitionTime":"2026-01-21T21:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.481859 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.481910 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.481920 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.481955 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.481966 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:22Z","lastTransitionTime":"2026-01-21T21:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.578293 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.578487 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:22 crc kubenswrapper[4860]: E0121 21:10:22.578552 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:22 crc kubenswrapper[4860]: E0121 21:10:22.578752 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.584707 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.584770 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.584779 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.584803 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.584816 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:22Z","lastTransitionTime":"2026-01-21T21:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.687518 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.687568 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.687578 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.687597 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.687608 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:22Z","lastTransitionTime":"2026-01-21T21:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.790418 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.790498 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.790516 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.790547 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.790570 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:22Z","lastTransitionTime":"2026-01-21T21:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.894410 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.894472 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.894485 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.894513 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.894529 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:22Z","lastTransitionTime":"2026-01-21T21:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.997364 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.997497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.997516 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.997547 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:22 crc kubenswrapper[4860]: I0121 21:10:22.997571 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:22Z","lastTransitionTime":"2026-01-21T21:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.102459 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.102525 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.102535 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.102556 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.102572 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:23Z","lastTransitionTime":"2026-01-21T21:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.205869 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.205922 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.205961 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.205983 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.206032 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:23Z","lastTransitionTime":"2026-01-21T21:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.308524 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:13:00.232837413 +0000 UTC Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.309537 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.309653 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.309678 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.309708 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.309726 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:23Z","lastTransitionTime":"2026-01-21T21:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.412234 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.412280 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.412288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.412304 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.412316 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:23Z","lastTransitionTime":"2026-01-21T21:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.514133 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.514185 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.514199 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.514222 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.514235 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:23Z","lastTransitionTime":"2026-01-21T21:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.577864 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.577924 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:23 crc kubenswrapper[4860]: E0121 21:10:23.578052 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:23 crc kubenswrapper[4860]: E0121 21:10:23.578257 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.617507 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.617624 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.617648 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.617688 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.617715 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:23Z","lastTransitionTime":"2026-01-21T21:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.720803 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.720857 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.720867 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.720884 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.720897 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:23Z","lastTransitionTime":"2026-01-21T21:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.823419 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.823538 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.823560 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.823592 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.823617 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:23Z","lastTransitionTime":"2026-01-21T21:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.926761 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.926813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.926823 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.926840 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:23 crc kubenswrapper[4860]: I0121 21:10:23.926852 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:23Z","lastTransitionTime":"2026-01-21T21:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.030110 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.030164 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.030174 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.030189 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.030199 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:24Z","lastTransitionTime":"2026-01-21T21:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.134119 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.134191 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.134210 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.134237 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.134256 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:24Z","lastTransitionTime":"2026-01-21T21:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.238393 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.238496 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.238514 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.238542 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.238561 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:24Z","lastTransitionTime":"2026-01-21T21:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.309281 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:15:05.921719335 +0000 UTC Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.342277 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.342332 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.342349 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.342376 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.342396 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:24Z","lastTransitionTime":"2026-01-21T21:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.445324 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.445366 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.445375 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.445390 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.445399 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:24Z","lastTransitionTime":"2026-01-21T21:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.548791 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.548876 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.548900 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.548927 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.548984 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:24Z","lastTransitionTime":"2026-01-21T21:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.578457 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.578451 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:24 crc kubenswrapper[4860]: E0121 21:10:24.578621 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:24 crc kubenswrapper[4860]: E0121 21:10:24.578794 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.651423 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.651466 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.651477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.651493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.651503 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:24Z","lastTransitionTime":"2026-01-21T21:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.753889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.753971 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.753988 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.754009 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.754024 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:24Z","lastTransitionTime":"2026-01-21T21:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.857191 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.857241 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.857253 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.857269 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.857279 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:24Z","lastTransitionTime":"2026-01-21T21:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.960215 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.960272 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.960288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.960309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:24 crc kubenswrapper[4860]: I0121 21:10:24.960320 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:24Z","lastTransitionTime":"2026-01-21T21:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.063163 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.063214 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.063226 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.063244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.063256 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.126831 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.126878 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.126890 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.126909 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.126922 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: E0121 21:10:25.142080 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:25Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.147477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.147509 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.147519 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.147536 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.147547 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: E0121 21:10:25.161191 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:25Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.164987 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.165017 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.165030 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.165049 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.165061 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: E0121 21:10:25.177354 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:25Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.196112 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.196156 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.196169 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.196191 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.196202 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: E0121 21:10:25.212678 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:25Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.217641 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.217674 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.217694 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.217713 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.217728 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: E0121 21:10:25.233969 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T21:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148647ae-8206-4b09-9045-f550cec0b288\\\",\\\"systemUUID\\\":\\\"5b1ad41e-3342-4aef-8a8f-31edafe270ff\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T21:10:25Z is after 2025-08-24T17:21:41Z" Jan 21 21:10:25 crc kubenswrapper[4860]: E0121 21:10:25.234092 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.235839 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.235889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.235899 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.235919 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.235950 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.310364 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:45:04.951757677 +0000 UTC Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.338962 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.339004 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.339013 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.339030 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.339043 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.442099 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.442144 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.442153 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.442169 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.442180 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.544745 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.544806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.544819 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.544855 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.544867 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.578769 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.578769 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:25 crc kubenswrapper[4860]: E0121 21:10:25.579079 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:25 crc kubenswrapper[4860]: E0121 21:10:25.579197 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.647851 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.647899 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.647909 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.647925 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.647954 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.751264 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.751302 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.751313 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.751330 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.751343 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.854236 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.854288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.854300 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.854318 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.854331 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.956834 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.956865 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.956873 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.956886 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:25 crc kubenswrapper[4860]: I0121 21:10:25.956896 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:25Z","lastTransitionTime":"2026-01-21T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.062782 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.062833 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.062848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.062871 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.062888 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:26Z","lastTransitionTime":"2026-01-21T21:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.166030 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.166080 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.166100 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.166123 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.166139 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:26Z","lastTransitionTime":"2026-01-21T21:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.268880 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.268996 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.269014 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.269072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.269094 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:26Z","lastTransitionTime":"2026-01-21T21:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.310908 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:38:01.312942246 +0000 UTC Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.372682 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.372740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.372753 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.372773 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.372785 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:26Z","lastTransitionTime":"2026-01-21T21:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.476350 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.476408 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.476420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.476439 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.476453 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:26Z","lastTransitionTime":"2026-01-21T21:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.578118 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.578211 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:26 crc kubenswrapper[4860]: E0121 21:10:26.578258 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:26 crc kubenswrapper[4860]: E0121 21:10:26.578362 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.579074 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.579162 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.579213 4860 scope.go:117] "RemoveContainer" containerID="8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.579250 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.579277 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.579293 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:26Z","lastTransitionTime":"2026-01-21T21:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:26 crc kubenswrapper[4860]: E0121 21:10:26.579531 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.682660 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.682709 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.682721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.682740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.682752 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:26Z","lastTransitionTime":"2026-01-21T21:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.785682 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.785723 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.785733 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.785748 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.785758 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:26Z","lastTransitionTime":"2026-01-21T21:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.888545 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.888619 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.888638 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.888666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.888684 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:26Z","lastTransitionTime":"2026-01-21T21:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.990726 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.990777 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.990790 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.990809 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:26 crc kubenswrapper[4860]: I0121 21:10:26.990821 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:26Z","lastTransitionTime":"2026-01-21T21:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.093841 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.093926 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.093967 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.093990 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.094005 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:27Z","lastTransitionTime":"2026-01-21T21:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.196062 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.196101 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.196109 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.196127 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.196144 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:27Z","lastTransitionTime":"2026-01-21T21:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.298278 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.298318 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.298328 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.298343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.298354 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:27Z","lastTransitionTime":"2026-01-21T21:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.311854 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:29:30.256325581 +0000 UTC Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.401427 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.401483 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.401495 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.401513 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.401526 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:27Z","lastTransitionTime":"2026-01-21T21:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.504889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.504952 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.504964 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.504982 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.504995 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:27Z","lastTransitionTime":"2026-01-21T21:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.578119 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.578119 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:27 crc kubenswrapper[4860]: E0121 21:10:27.578321 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:27 crc kubenswrapper[4860]: E0121 21:10:27.578354 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.607545 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.607593 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.607603 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.607623 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.607634 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:27Z","lastTransitionTime":"2026-01-21T21:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.710282 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.710331 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.710343 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.710361 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.710374 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:27Z","lastTransitionTime":"2026-01-21T21:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.813334 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.813398 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.813406 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.813422 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.813431 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:27Z","lastTransitionTime":"2026-01-21T21:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.916524 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.916563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.916574 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.916588 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:27 crc kubenswrapper[4860]: I0121 21:10:27.916597 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:27Z","lastTransitionTime":"2026-01-21T21:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.019013 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.019053 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.019063 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.019078 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.019089 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:28Z","lastTransitionTime":"2026-01-21T21:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.122889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.123008 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.123038 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.123072 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.123098 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:28Z","lastTransitionTime":"2026-01-21T21:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.226606 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.226646 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.226656 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.226671 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.226681 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:28Z","lastTransitionTime":"2026-01-21T21:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.312653 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:53:38.937271759 +0000 UTC Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.350645 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.350767 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.350792 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.350823 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.350850 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:28Z","lastTransitionTime":"2026-01-21T21:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.453387 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.453439 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.453454 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.453471 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.453483 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:28Z","lastTransitionTime":"2026-01-21T21:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 21:10:28 crc kubenswrapper[4860]: E0121 21:10:28.553968 4860 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.578579 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.578669 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:28 crc kubenswrapper[4860]: E0121 21:10:28.578703 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:28 crc kubenswrapper[4860]: E0121 21:10:28.578875 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.639842 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=96.639783983 podStartE2EDuration="1m36.639783983s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:28.612167153 +0000 UTC m=+120.834345663" watchObservedRunningTime="2026-01-21 21:10:28.639783983 +0000 UTC m=+120.861962473" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.640210 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=67.640200576 podStartE2EDuration="1m7.640200576s" podCreationTimestamp="2026-01-21 21:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:28.638828234 +0000 UTC m=+120.861006714" watchObservedRunningTime="2026-01-21 21:10:28.640200576 +0000 UTC m=+120.862379066" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.653955 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6n8b5" podStartSLOduration=97.653909948 podStartE2EDuration="1m37.653909948s" podCreationTimestamp="2026-01-21 21:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:28.653781344 +0000 UTC m=+120.875959824" watchObservedRunningTime="2026-01-21 21:10:28.653909948 +0000 UTC m=+120.876088438" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.693528 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=96.693507379 podStartE2EDuration="1m36.693507379s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:28.675215908 +0000 UTC m=+120.897394378" watchObservedRunningTime="2026-01-21 21:10:28.693507379 +0000 UTC m=+120.915685859" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.764317 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=42.764291908 podStartE2EDuration="42.764291908s" podCreationTimestamp="2026-01-21 21:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:28.763654619 +0000 UTC m=+120.985833089" watchObservedRunningTime="2026-01-21 21:10:28.764291908 +0000 UTC m=+120.986470378" Jan 21 21:10:28 crc 
kubenswrapper[4860]: I0121 21:10:28.764795 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4c4b" podStartSLOduration=95.764789572 podStartE2EDuration="1m35.764789572s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:28.75271079 +0000 UTC m=+120.974889280" watchObservedRunningTime="2026-01-21 21:10:28.764789572 +0000 UTC m=+120.986968032" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.777086 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ccxw8" podStartSLOduration=97.777064342 podStartE2EDuration="1m37.777064342s" podCreationTimestamp="2026-01-21 21:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:28.776731402 +0000 UTC m=+120.998909872" watchObservedRunningTime="2026-01-21 21:10:28.777064342 +0000 UTC m=+120.999242822" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.882248 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=28.882226105 podStartE2EDuration="28.882226105s" podCreationTimestamp="2026-01-21 21:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:28.858734088 +0000 UTC m=+121.080912568" watchObservedRunningTime="2026-01-21 21:10:28.882226105 +0000 UTC m=+121.104404575" Jan 21 21:10:28 crc kubenswrapper[4860]: E0121 21:10:28.910505 4860 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.934587 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-s67xh" podStartSLOduration=96.934566979 podStartE2EDuration="1m36.934566979s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:28.934022282 +0000 UTC m=+121.156200772" watchObservedRunningTime="2026-01-21 21:10:28.934566979 +0000 UTC m=+121.156745449" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.951085 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podStartSLOduration=96.951057415 podStartE2EDuration="1m36.951057415s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:28.94957958 +0000 UTC m=+121.171758050" watchObservedRunningTime="2026-01-21 21:10:28.951057415 +0000 UTC m=+121.173235885" Jan 21 21:10:28 crc kubenswrapper[4860]: I0121 21:10:28.971284 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-77hw7" podStartSLOduration=96.971263083 podStartE2EDuration="1m36.971263083s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:28.970602623 +0000 UTC m=+121.192781113" watchObservedRunningTime="2026-01-21 21:10:28.971263083 +0000 UTC m=+121.193441543" Jan 21 21:10:29 crc kubenswrapper[4860]: I0121 21:10:29.313346 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 
05:02:04.721962225 +0000 UTC Jan 21 21:10:29 crc kubenswrapper[4860]: I0121 21:10:29.577731 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:29 crc kubenswrapper[4860]: I0121 21:10:29.577761 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:29 crc kubenswrapper[4860]: E0121 21:10:29.577889 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:29 crc kubenswrapper[4860]: E0121 21:10:29.578082 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:30 crc kubenswrapper[4860]: I0121 21:10:30.314472 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:29:22.530288379 +0000 UTC Jan 21 21:10:30 crc kubenswrapper[4860]: I0121 21:10:30.577756 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:30 crc kubenswrapper[4860]: I0121 21:10:30.577839 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:30 crc kubenswrapper[4860]: E0121 21:10:30.577909 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:30 crc kubenswrapper[4860]: E0121 21:10:30.578396 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:30 crc kubenswrapper[4860]: I0121 21:10:30.841984 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/1.log" Jan 21 21:10:30 crc kubenswrapper[4860]: I0121 21:10:30.842577 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/0.log" Jan 21 21:10:30 crc kubenswrapper[4860]: I0121 21:10:30.842648 4860 generic.go:334] "Generic (PLEG): container finished" podID="e2a7ca69-9cb5-41b5-9213-72165a9fc8e1" containerID="ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be" exitCode=1 Jan 21 21:10:30 crc kubenswrapper[4860]: I0121 21:10:30.842699 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s67xh" 
event={"ID":"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1","Type":"ContainerDied","Data":"ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be"} Jan 21 21:10:30 crc kubenswrapper[4860]: I0121 21:10:30.842747 4860 scope.go:117] "RemoveContainer" containerID="0f95f6aeb04409dbf00e98e6a0c10fbef6034f3b0cc0a838b043c1e773a85168" Jan 21 21:10:30 crc kubenswrapper[4860]: I0121 21:10:30.843636 4860 scope.go:117] "RemoveContainer" containerID="ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be" Jan 21 21:10:30 crc kubenswrapper[4860]: E0121 21:10:30.844196 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-s67xh_openshift-multus(e2a7ca69-9cb5-41b5-9213-72165a9fc8e1)\"" pod="openshift-multus/multus-s67xh" podUID="e2a7ca69-9cb5-41b5-9213-72165a9fc8e1" Jan 21 21:10:31 crc kubenswrapper[4860]: I0121 21:10:31.315003 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:40:12.786473566 +0000 UTC Jan 21 21:10:31 crc kubenswrapper[4860]: I0121 21:10:31.578101 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:31 crc kubenswrapper[4860]: I0121 21:10:31.578168 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:31 crc kubenswrapper[4860]: E0121 21:10:31.578348 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:31 crc kubenswrapper[4860]: E0121 21:10:31.578472 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:31 crc kubenswrapper[4860]: I0121 21:10:31.847738 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/1.log" Jan 21 21:10:32 crc kubenswrapper[4860]: I0121 21:10:32.315650 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 15:34:56.20226869 +0000 UTC Jan 21 21:10:32 crc kubenswrapper[4860]: I0121 21:10:32.578142 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:32 crc kubenswrapper[4860]: I0121 21:10:32.578225 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:32 crc kubenswrapper[4860]: E0121 21:10:32.578281 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:32 crc kubenswrapper[4860]: E0121 21:10:32.578378 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:33 crc kubenswrapper[4860]: I0121 21:10:33.316391 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:24:14.822162341 +0000 UTC Jan 21 21:10:33 crc kubenswrapper[4860]: I0121 21:10:33.578754 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:33 crc kubenswrapper[4860]: I0121 21:10:33.578777 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:33 crc kubenswrapper[4860]: E0121 21:10:33.578893 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:33 crc kubenswrapper[4860]: E0121 21:10:33.578965 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:33 crc kubenswrapper[4860]: E0121 21:10:33.911609 4860 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 21:10:34 crc kubenswrapper[4860]: I0121 21:10:34.317462 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:11:30.891144944 +0000 UTC Jan 21 21:10:34 crc kubenswrapper[4860]: I0121 21:10:34.578610 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:34 crc kubenswrapper[4860]: I0121 21:10:34.578711 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:34 crc kubenswrapper[4860]: E0121 21:10:34.578756 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:34 crc kubenswrapper[4860]: E0121 21:10:34.578868 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.317799 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:41:31.376047625 +0000 UTC Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.457979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.458211 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.458219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.458235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.458247 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T21:10:35Z","lastTransitionTime":"2026-01-21T21:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.520072 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f"] Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.521061 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.524236 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.524878 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.524908 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.525465 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.578853 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:35 crc kubenswrapper[4860]: E0121 21:10:35.579019 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.579099 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:35 crc kubenswrapper[4860]: E0121 21:10:35.579333 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.681104 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26fa2116-bf41-4a78-bcc1-fc83e9355970-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.681164 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26fa2116-bf41-4a78-bcc1-fc83e9355970-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.681193 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26fa2116-bf41-4a78-bcc1-fc83e9355970-service-ca\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.681216 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/26fa2116-bf41-4a78-bcc1-fc83e9355970-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.681249 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/26fa2116-bf41-4a78-bcc1-fc83e9355970-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.782749 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26fa2116-bf41-4a78-bcc1-fc83e9355970-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.782803 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26fa2116-bf41-4a78-bcc1-fc83e9355970-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.782842 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26fa2116-bf41-4a78-bcc1-fc83e9355970-service-ca\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: 
\"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.783365 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/26fa2116-bf41-4a78-bcc1-fc83e9355970-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.782876 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/26fa2116-bf41-4a78-bcc1-fc83e9355970-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.783589 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/26fa2116-bf41-4a78-bcc1-fc83e9355970-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.783678 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/26fa2116-bf41-4a78-bcc1-fc83e9355970-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.784158 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/26fa2116-bf41-4a78-bcc1-fc83e9355970-service-ca\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.789986 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26fa2116-bf41-4a78-bcc1-fc83e9355970-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.805888 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26fa2116-bf41-4a78-bcc1-fc83e9355970-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-85x7f\" (UID: \"26fa2116-bf41-4a78-bcc1-fc83e9355970\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.836736 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" Jan 21 21:10:35 crc kubenswrapper[4860]: I0121 21:10:35.862264 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" event={"ID":"26fa2116-bf41-4a78-bcc1-fc83e9355970","Type":"ContainerStarted","Data":"a2b375a99d0be4159b987a84d75ed5fce4ae9f2eee7737c32b2c710f4938179e"} Jan 21 21:10:36 crc kubenswrapper[4860]: I0121 21:10:36.318763 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:21:00.899645715 +0000 UTC Jan 21 21:10:36 crc kubenswrapper[4860]: I0121 21:10:36.318837 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 21:10:36 crc kubenswrapper[4860]: I0121 21:10:36.328791 4860 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 21:10:36 crc kubenswrapper[4860]: I0121 21:10:36.578148 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:36 crc kubenswrapper[4860]: I0121 21:10:36.578220 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:36 crc kubenswrapper[4860]: E0121 21:10:36.578315 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:36 crc kubenswrapper[4860]: E0121 21:10:36.578402 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:36 crc kubenswrapper[4860]: I0121 21:10:36.867133 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" event={"ID":"26fa2116-bf41-4a78-bcc1-fc83e9355970","Type":"ContainerStarted","Data":"a47a93a4482c3aa909edf0a4a12f66f24e5d86b487f80793456483ecd252264b"} Jan 21 21:10:36 crc kubenswrapper[4860]: I0121 21:10:36.885063 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-85x7f" podStartSLOduration=104.885038861 podStartE2EDuration="1m44.885038861s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:36.884875656 +0000 UTC m=+129.107054146" watchObservedRunningTime="2026-01-21 21:10:36.885038861 +0000 UTC m=+129.107217341" Jan 21 21:10:37 crc kubenswrapper[4860]: I0121 21:10:37.578161 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:37 crc kubenswrapper[4860]: E0121 21:10:37.578341 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:37 crc kubenswrapper[4860]: I0121 21:10:37.578161 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:37 crc kubenswrapper[4860]: E0121 21:10:37.578589 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:38 crc kubenswrapper[4860]: I0121 21:10:38.578700 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:38 crc kubenswrapper[4860]: I0121 21:10:38.578822 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:38 crc kubenswrapper[4860]: E0121 21:10:38.580288 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:38 crc kubenswrapper[4860]: E0121 21:10:38.580594 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:38 crc kubenswrapper[4860]: E0121 21:10:38.914777 4860 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 21:10:39 crc kubenswrapper[4860]: I0121 21:10:39.577985 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:39 crc kubenswrapper[4860]: I0121 21:10:39.578090 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:39 crc kubenswrapper[4860]: E0121 21:10:39.578154 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:39 crc kubenswrapper[4860]: E0121 21:10:39.578330 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:40 crc kubenswrapper[4860]: I0121 21:10:40.578641 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:40 crc kubenswrapper[4860]: I0121 21:10:40.578692 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:40 crc kubenswrapper[4860]: E0121 21:10:40.579070 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:40 crc kubenswrapper[4860]: E0121 21:10:40.579260 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:40 crc kubenswrapper[4860]: I0121 21:10:40.581776 4860 scope.go:117] "RemoveContainer" containerID="8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7" Jan 21 21:10:40 crc kubenswrapper[4860]: E0121 21:10:40.582250 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pzw2c_openshift-ovn-kubernetes(7976b0a1-a5f6-4aa6-86db-173e6342ff7f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" Jan 21 21:10:41 crc kubenswrapper[4860]: I0121 21:10:41.578335 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:41 crc kubenswrapper[4860]: I0121 21:10:41.578415 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:41 crc kubenswrapper[4860]: E0121 21:10:41.579032 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:41 crc kubenswrapper[4860]: E0121 21:10:41.579481 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:42 crc kubenswrapper[4860]: I0121 21:10:42.578507 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:42 crc kubenswrapper[4860]: I0121 21:10:42.578686 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:42 crc kubenswrapper[4860]: E0121 21:10:42.578798 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:42 crc kubenswrapper[4860]: I0121 21:10:42.579427 4860 scope.go:117] "RemoveContainer" containerID="ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be" Jan 21 21:10:42 crc kubenswrapper[4860]: E0121 21:10:42.579604 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:42 crc kubenswrapper[4860]: I0121 21:10:42.902116 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/1.log" Jan 21 21:10:42 crc kubenswrapper[4860]: I0121 21:10:42.902510 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s67xh" event={"ID":"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1","Type":"ContainerStarted","Data":"ad574bcd76cd727107043ba86bf21ea24269fefc8deb5e1cf8a15a01fe36fc4c"} Jan 21 21:10:43 crc kubenswrapper[4860]: I0121 21:10:43.578157 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:43 crc kubenswrapper[4860]: I0121 21:10:43.578176 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:43 crc kubenswrapper[4860]: E0121 21:10:43.578357 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:43 crc kubenswrapper[4860]: E0121 21:10:43.578395 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:43 crc kubenswrapper[4860]: E0121 21:10:43.916212 4860 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 21:10:44 crc kubenswrapper[4860]: I0121 21:10:44.578058 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:44 crc kubenswrapper[4860]: I0121 21:10:44.578284 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:44 crc kubenswrapper[4860]: E0121 21:10:44.578478 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:44 crc kubenswrapper[4860]: E0121 21:10:44.578690 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:45 crc kubenswrapper[4860]: I0121 21:10:45.578565 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:45 crc kubenswrapper[4860]: E0121 21:10:45.578721 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:45 crc kubenswrapper[4860]: I0121 21:10:45.579156 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:45 crc kubenswrapper[4860]: E0121 21:10:45.579240 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:46 crc kubenswrapper[4860]: I0121 21:10:46.578528 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:46 crc kubenswrapper[4860]: I0121 21:10:46.578581 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:46 crc kubenswrapper[4860]: E0121 21:10:46.578763 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:46 crc kubenswrapper[4860]: E0121 21:10:46.578897 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:47 crc kubenswrapper[4860]: I0121 21:10:47.578861 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:47 crc kubenswrapper[4860]: E0121 21:10:47.579131 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:47 crc kubenswrapper[4860]: I0121 21:10:47.579472 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:47 crc kubenswrapper[4860]: E0121 21:10:47.579578 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:48 crc kubenswrapper[4860]: I0121 21:10:48.578642 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:48 crc kubenswrapper[4860]: I0121 21:10:48.578806 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:48 crc kubenswrapper[4860]: E0121 21:10:48.581560 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:48 crc kubenswrapper[4860]: E0121 21:10:48.581801 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:48 crc kubenswrapper[4860]: E0121 21:10:48.917615 4860 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 21:10:49 crc kubenswrapper[4860]: I0121 21:10:49.578394 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:49 crc kubenswrapper[4860]: I0121 21:10:49.578465 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:49 crc kubenswrapper[4860]: E0121 21:10:49.578745 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:49 crc kubenswrapper[4860]: E0121 21:10:49.579230 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:50 crc kubenswrapper[4860]: I0121 21:10:50.578253 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:50 crc kubenswrapper[4860]: I0121 21:10:50.578782 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:50 crc kubenswrapper[4860]: E0121 21:10:50.578962 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:50 crc kubenswrapper[4860]: E0121 21:10:50.579217 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:51 crc kubenswrapper[4860]: I0121 21:10:51.577919 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:51 crc kubenswrapper[4860]: I0121 21:10:51.577919 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:51 crc kubenswrapper[4860]: E0121 21:10:51.578126 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:51 crc kubenswrapper[4860]: E0121 21:10:51.578192 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:52 crc kubenswrapper[4860]: I0121 21:10:52.577830 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:52 crc kubenswrapper[4860]: E0121 21:10:52.578099 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:52 crc kubenswrapper[4860]: I0121 21:10:52.579006 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:52 crc kubenswrapper[4860]: E0121 21:10:52.579274 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:53 crc kubenswrapper[4860]: I0121 21:10:53.578033 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:53 crc kubenswrapper[4860]: E0121 21:10:53.578830 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:53 crc kubenswrapper[4860]: I0121 21:10:53.578124 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:53 crc kubenswrapper[4860]: E0121 21:10:53.579105 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:53 crc kubenswrapper[4860]: E0121 21:10:53.919075 4860 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 21:10:54 crc kubenswrapper[4860]: I0121 21:10:54.578418 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:54 crc kubenswrapper[4860]: I0121 21:10:54.578485 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:54 crc kubenswrapper[4860]: E0121 21:10:54.578811 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:54 crc kubenswrapper[4860]: E0121 21:10:54.578992 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:55 crc kubenswrapper[4860]: I0121 21:10:55.578286 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:55 crc kubenswrapper[4860]: I0121 21:10:55.578369 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:55 crc kubenswrapper[4860]: E0121 21:10:55.578650 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:55 crc kubenswrapper[4860]: E0121 21:10:55.578739 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:55 crc kubenswrapper[4860]: I0121 21:10:55.579008 4860 scope.go:117] "RemoveContainer" containerID="8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7" Jan 21 21:10:56 crc kubenswrapper[4860]: I0121 21:10:56.579118 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:56 crc kubenswrapper[4860]: E0121 21:10:56.579294 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:56 crc kubenswrapper[4860]: I0121 21:10:56.579415 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:56 crc kubenswrapper[4860]: E0121 21:10:56.579566 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:57 crc kubenswrapper[4860]: I0121 21:10:57.578268 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:57 crc kubenswrapper[4860]: I0121 21:10:57.578283 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:57 crc kubenswrapper[4860]: E0121 21:10:57.578428 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:57 crc kubenswrapper[4860]: E0121 21:10:57.578528 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:10:57 crc kubenswrapper[4860]: I0121 21:10:57.912670 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rrwcr"] Jan 21 21:10:57 crc kubenswrapper[4860]: I0121 21:10:57.912786 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:57 crc kubenswrapper[4860]: E0121 21:10:57.912878 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:57 crc kubenswrapper[4860]: I0121 21:10:57.965095 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/3.log" Jan 21 21:10:57 crc kubenswrapper[4860]: I0121 21:10:57.967625 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerStarted","Data":"21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a"} Jan 21 21:10:57 crc kubenswrapper[4860]: I0121 21:10:57.968117 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:10:58 crc kubenswrapper[4860]: I0121 21:10:58.004923 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podStartSLOduration=126.004904513 podStartE2EDuration="2m6.004904513s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:10:58.004900993 +0000 UTC m=+150.227079493" watchObservedRunningTime="2026-01-21 21:10:58.004904513 +0000 UTC m=+150.227082983" Jan 21 21:10:58 crc kubenswrapper[4860]: I0121 21:10:58.578870 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:58 crc kubenswrapper[4860]: E0121 21:10:58.580134 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:10:58 crc kubenswrapper[4860]: E0121 21:10:58.920379 4860 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 21:10:58 crc kubenswrapper[4860]: I0121 21:10:58.986464 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:10:58 crc kubenswrapper[4860]: E0121 21:10:58.986658 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:13:00.986630827 +0000 UTC m=+273.208809297 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:10:59 crc kubenswrapper[4860]: I0121 21:10:59.189188 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:59 crc kubenswrapper[4860]: I0121 21:10:59.189262 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:59 crc kubenswrapper[4860]: I0121 21:10:59.189281 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:59 crc kubenswrapper[4860]: I0121 21:10:59.189300 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189410 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189472 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:13:01.189455981 +0000 UTC m=+273.411634451 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189603 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189631 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 21:13:01.189623106 +0000 UTC m=+273.411801576 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189705 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189718 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189733 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189758 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 21:13:01.18975106 +0000 UTC m=+273.411929530 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189809 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189821 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189830 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.189850 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 21:13:01.189844623 +0000 UTC m=+273.412023093 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 21:10:59 crc kubenswrapper[4860]: I0121 21:10:59.578484 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:10:59 crc kubenswrapper[4860]: I0121 21:10:59.578528 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:10:59 crc kubenswrapper[4860]: I0121 21:10:59.578643 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.578633 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.578766 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:10:59 crc kubenswrapper[4860]: E0121 21:10:59.578812 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:11:00 crc kubenswrapper[4860]: I0121 21:11:00.579395 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:11:00 crc kubenswrapper[4860]: E0121 21:11:00.579668 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:11:01 crc kubenswrapper[4860]: I0121 21:11:01.578274 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:11:01 crc kubenswrapper[4860]: I0121 21:11:01.578322 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:11:01 crc kubenswrapper[4860]: I0121 21:11:01.578367 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:11:01 crc kubenswrapper[4860]: E0121 21:11:01.578431 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:11:01 crc kubenswrapper[4860]: E0121 21:11:01.578493 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:11:01 crc kubenswrapper[4860]: E0121 21:11:01.578687 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:11:02 crc kubenswrapper[4860]: I0121 21:11:02.103862 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:11:02 crc kubenswrapper[4860]: I0121 21:11:02.104060 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:11:02 crc kubenswrapper[4860]: I0121 21:11:02.578696 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:11:02 crc kubenswrapper[4860]: E0121 21:11:02.578863 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 21:11:03 crc kubenswrapper[4860]: I0121 21:11:03.578604 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:11:03 crc kubenswrapper[4860]: I0121 21:11:03.578684 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:11:03 crc kubenswrapper[4860]: I0121 21:11:03.578639 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:11:03 crc kubenswrapper[4860]: E0121 21:11:03.578879 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 21:11:03 crc kubenswrapper[4860]: E0121 21:11:03.579040 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 21:11:03 crc kubenswrapper[4860]: E0121 21:11:03.579207 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rrwcr" podUID="60ae05da-3403-4a2f-92f4-2ffa574a65a8" Jan 21 21:11:04 crc kubenswrapper[4860]: I0121 21:11:04.578598 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:11:04 crc kubenswrapper[4860]: I0121 21:11:04.580821 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 21:11:04 crc kubenswrapper[4860]: I0121 21:11:04.582055 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 21:11:05 crc kubenswrapper[4860]: I0121 21:11:05.578607 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:11:05 crc kubenswrapper[4860]: I0121 21:11:05.578703 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:11:05 crc kubenswrapper[4860]: I0121 21:11:05.578712 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:11:05 crc kubenswrapper[4860]: I0121 21:11:05.581843 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 21:11:05 crc kubenswrapper[4860]: I0121 21:11:05.581916 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 21:11:05 crc kubenswrapper[4860]: I0121 21:11:05.582464 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 21:11:05 crc kubenswrapper[4860]: I0121 21:11:05.583433 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.052219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 21:11:06 crc 
kubenswrapper[4860]: I0121 21:11:06.105854 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.106269 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.112013 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.112490 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.114869 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.115131 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.115255 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.115458 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.115486 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.115635 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: 
I0121 21:11:06.116560 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xxb4c"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.117093 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.118214 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nd5p4"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.118591 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.129612 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-hbh47"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.130181 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.130597 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.130715 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.130631 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.131734 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.132301 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nm4mt"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.132957 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.153479 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fvk47"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.154040 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.158639 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.159003 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.159694 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.159869 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.160066 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.165537 4860 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.168888 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.169305 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.169547 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.169719 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.169819 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170045 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170060 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170207 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170243 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170209 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170399 4860 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170412 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170571 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170641 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170691 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170867 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.170982 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.171000 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.176427 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.177159 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.177589 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 21:11:06 crc 
kubenswrapper[4860]: I0121 21:11:06.177696 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.177799 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178005 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178018 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178099 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178128 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178166 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178239 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178335 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178349 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178470 4860 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178482 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178510 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.178584 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.180262 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.180783 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.186899 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.187031 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.187408 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.187552 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.187702 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 
21:11:06.187811 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.187977 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.188153 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.188250 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.188364 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.188468 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.187704 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.194186 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.196229 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.198597 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.200213 4860 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.200803 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.222617 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.222634 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.223857 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.224452 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q9n6j"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.225209 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tcx72"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.233285 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.234324 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.236665 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.242453 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.242710 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.243078 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.243198 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.248173 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.248378 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.248800 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.249413 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.250868 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.250979 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.251000 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.251042 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.251091 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.251158 4860 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.251162 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.251248 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.251269 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.251347 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jx5dt"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.252125 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.251356 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.251543 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.252392 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cgwn6"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.253124 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.253205 4860 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.253648 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.253862 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hv4bj"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.253920 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.257707 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hv4bj" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.261552 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.261793 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.262004 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.261810 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.262148 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.262324 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 
21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.262343 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.262495 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.262616 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.262645 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.262176 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.262799 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.263407 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.263729 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q224d"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.264251 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.264644 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.264897 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.265340 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.265675 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.265850 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.266356 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.268738 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.269973 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270030 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82c01e58-4984-4ac3-951d-0f96fff19f57-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gnn4g\" (UID: 
\"82c01e58-4984-4ac3-951d-0f96fff19f57\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270065 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-config\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270092 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-service-ca\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270119 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c01e58-4984-4ac3-951d-0f96fff19f57-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gnn4g\" (UID: \"82c01e58-4984-4ac3-951d-0f96fff19f57\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270168 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/32bee613-dd08-4612-936c-dd68b630651e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270189 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-dir\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270226 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270252 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-trusted-ca-bundle\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270279 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d62e94e1-ec68-4f36-9de7-005b8ed5a0ac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nzkbt\" (UID: \"d62e94e1-ec68-4f36-9de7-005b8ed5a0ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270320 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/32bee613-dd08-4612-936c-dd68b630651e-etcd-client\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270343 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270374 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx42c\" (UniqueName: \"kubernetes.io/projected/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-kube-api-access-wx42c\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270396 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32bee613-dd08-4612-936c-dd68b630651e-audit-policies\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270430 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkfnw\" (UniqueName: \"kubernetes.io/projected/fb13868e-5322-4a98-b168-40a0a6bd8459-kube-api-access-qkfnw\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270467 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftn9q\" (UniqueName: \"kubernetes.io/projected/32bee613-dd08-4612-936c-dd68b630651e-kube-api-access-ftn9q\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270491 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/32bee613-dd08-4612-936c-dd68b630651e-encryption-config\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270517 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdkhd\" (UniqueName: \"kubernetes.io/projected/57019cb4-962f-4e52-889d-d11bac56fa88-kube-api-access-sdkhd\") pod \"console-operator-58897d9998-nm4mt\" (UID: \"57019cb4-962f-4e52-889d-d11bac56fa88\") " pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270550 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270577 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsfcf\" (UniqueName: \"kubernetes.io/projected/82c01e58-4984-4ac3-951d-0f96fff19f57-kube-api-access-qsfcf\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-gnn4g\" (UID: \"82c01e58-4984-4ac3-951d-0f96fff19f57\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270603 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270626 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270666 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61cb972e-5da1-4381-9490-337000f6aa00-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270698 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khpbg\" (UniqueName: \"kubernetes.io/projected/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-kube-api-access-khpbg\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270728 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj5z2\" (UniqueName: \"kubernetes.io/projected/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-kube-api-access-mj5z2\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270757 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-client-ca\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270790 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32bee613-dd08-4612-936c-dd68b630651e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270815 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270845 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d62e94e1-ec68-4f36-9de7-005b8ed5a0ac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nzkbt\" (UID: \"d62e94e1-ec68-4f36-9de7-005b8ed5a0ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270903 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57019cb4-962f-4e52-889d-d11bac56fa88-config\") pod \"console-operator-58897d9998-nm4mt\" (UID: \"57019cb4-962f-4e52-889d-d11bac56fa88\") " pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.270930 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/61cb972e-5da1-4381-9490-337000f6aa00-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271082 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-oauth-config\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271106 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-config\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" 
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271133 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bee613-dd08-4612-936c-dd68b630651e-serving-cert\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271156 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271175 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271196 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-client-ca\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271226 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/57019cb4-962f-4e52-889d-d11bac56fa88-trusted-ca\") pod \"console-operator-58897d9998-nm4mt\" (UID: \"57019cb4-962f-4e52-889d-d11bac56fa88\") " pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271505 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271677 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-serving-cert\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271726 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn95x\" (UniqueName: \"kubernetes.io/projected/d62e94e1-ec68-4f36-9de7-005b8ed5a0ac-kube-api-access-xn95x\") pod \"openshift-apiserver-operator-796bbdcf4f-nzkbt\" (UID: \"d62e94e1-ec68-4f36-9de7-005b8ed5a0ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271753 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-oauth-serving-cert\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271829 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271901 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-service-ca-bundle\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.271946 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb13868e-5322-4a98-b168-40a0a6bd8459-serving-cert\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272006 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-config\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272124 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272249 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-policies\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272287 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61cb972e-5da1-4381-9490-337000f6aa00-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272310 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lqpk\" (UniqueName: \"kubernetes.io/projected/d1fafd15-88be-43d0-b7f0-750b4c592352-kube-api-access-8lqpk\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272389 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272414 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dncw\" (UniqueName: \"kubernetes.io/projected/61cb972e-5da1-4381-9490-337000f6aa00-kube-api-access-4dncw\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272430 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-config\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272451 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-serving-cert\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272514 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32bee613-dd08-4612-936c-dd68b630651e-audit-dir\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272567 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57019cb4-962f-4e52-889d-d11bac56fa88-serving-cert\") pod \"console-operator-58897d9998-nm4mt\" (UID: 
\"57019cb4-962f-4e52-889d-d11bac56fa88\") " pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272586 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.272602 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.275366 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.275590 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-v4hsh"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.276241 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.276776 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.276965 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.277792 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.278060 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.279075 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.281264 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.282436 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.288091 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.288872 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.290554 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-slx45"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.291030 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.293210 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.293743 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.294217 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.294866 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.295630 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.296323 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.297410 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lcbjc"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.298364 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.299006 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.299673 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.301679 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.302320 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.302769 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.303282 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.304499 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k7nfg"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.305107 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.309567 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.311040 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.312884 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.313822 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nsjpv"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.316200 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.317519 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-b9252"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.318240 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.321185 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-b9252" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.324449 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xxb4c"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.326247 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.329201 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.329498 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.341804 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.350199 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4dq5s"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.351320 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4dq5s" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.352171 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.353239 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.354388 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nd5p4"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.355101 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hbh47"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.356288 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.357601 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.358823 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jx5dt"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.359971 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tcx72"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.362839 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.362898 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-fvk47"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.363459 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.364667 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.366632 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q224d"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.368648 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.369214 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.369652 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-b9252"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.370702 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nm4mt"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.371959 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.372872 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.376653 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-trusted-ca-bundle\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.376706 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.376858 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/32bee613-dd08-4612-936c-dd68b630651e-encryption-config\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.376903 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.376952 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkswk\" (UniqueName: \"kubernetes.io/projected/35ea2f50-9645-4c72-85be-367a40e4a19e-kube-api-access-lkswk\") pod \"service-ca-operator-777779d784-slx45\" (UID: \"35ea2f50-9645-4c72-85be-367a40e4a19e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" Jan 21 21:11:06 crc 
kubenswrapper[4860]: I0121 21:11:06.376984 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khpbg\" (UniqueName: \"kubernetes.io/projected/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-kube-api-access-khpbg\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377006 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377025 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8cdm\" (UniqueName: \"kubernetes.io/projected/d81c2475-b36c-44d5-a7da-1bec8c5871b0-kube-api-access-w8cdm\") pod \"package-server-manager-789f6589d5-ncbcn\" (UID: \"d81c2475-b36c-44d5-a7da-1bec8c5871b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377047 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b88e1a68-3348-4ac7-b0b8-ba2215da118f-service-ca-bundle\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377065 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp6x9\" (UniqueName: 
\"kubernetes.io/projected/70c3c027-6018-4182-bf8c-6309230608eb-kube-api-access-zp6x9\") pod \"collect-profiles-29483820-nknlp\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377088 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7kn7\" (UniqueName: \"kubernetes.io/projected/402083eb-5844-4f8c-8dfa-067947a1bc48-kube-api-access-t7kn7\") pod \"catalog-operator-68c6474976-stn5k\" (UID: \"402083eb-5844-4f8c-8dfa-067947a1bc48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377111 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32bee613-dd08-4612-936c-dd68b630651e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377132 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377154 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnvtx\" (UniqueName: \"kubernetes.io/projected/f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde-kube-api-access-rnvtx\") pod \"openshift-config-operator-7777fb866f-gwbfn\" (UID: \"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377174 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d62e94e1-ec68-4f36-9de7-005b8ed5a0ac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nzkbt\" (UID: \"d62e94e1-ec68-4f36-9de7-005b8ed5a0ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377192 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c1cfe23-822a-462f-9db6-b4d87eae0d58-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6scsc\" (UID: \"5c1cfe23-822a-462f-9db6-b4d87eae0d58\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377214 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhgtw\" (UniqueName: \"kubernetes.io/projected/08186c65-b069-4756-af19-5255a7a5fe2f-kube-api-access-rhgtw\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377234 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/61cb972e-5da1-4381-9490-337000f6aa00-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377283 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-config\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377305 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57019cb4-962f-4e52-889d-d11bac56fa88-trusted-ca\") pod \"console-operator-58897d9998-nm4mt\" (UID: \"57019cb4-962f-4e52-889d-d11bac56fa88\") " pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377323 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377374 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-serving-cert\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377392 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fa07de-d775-4c9b-af3e-03b39e6c33b6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t9nqj\" (UID: \"f9fa07de-d775-4c9b-af3e-03b39e6c33b6\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377408 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-config\") pod \"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377454 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn95x\" (UniqueName: \"kubernetes.io/projected/d62e94e1-ec68-4f36-9de7-005b8ed5a0ac-kube-api-access-xn95x\") pod \"openshift-apiserver-operator-796bbdcf4f-nzkbt\" (UID: \"d62e94e1-ec68-4f36-9de7-005b8ed5a0ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377476 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377513 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-policies\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377539 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/70aea1b0-13b2-43ee-a77d-10c3143e4a95-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4x452\" (UID: \"70aea1b0-13b2-43ee-a77d-10c3143e4a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377559 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35ea2f50-9645-4c72-85be-367a40e4a19e-config\") pod \"service-ca-operator-777779d784-slx45\" (UID: \"35ea2f50-9645-4c72-85be-367a40e4a19e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377606 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08186c65-b069-4756-af19-5255a7a5fe2f-config\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377626 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-images\") pod \"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377646 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: 
\"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377912 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-trusted-ca-bundle\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378069 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32bee613-dd08-4612-936c-dd68b630651e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.376673 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378124 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.377698 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwkzs\" (UniqueName: \"kubernetes.io/projected/19b5214f-7427-49e9-a40e-2c295e1600d4-kube-api-access-nwkzs\") pod \"machine-config-controller-84d6567774-c4t7l\" (UID: \"19b5214f-7427-49e9-a40e-2c295e1600d4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378448 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmx7c\" (UniqueName: 
\"kubernetes.io/projected/c337e9fe-a7db-4b56-92c4-82905fb59d53-kube-api-access-gmx7c\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378471 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dncw\" (UniqueName: \"kubernetes.io/projected/61cb972e-5da1-4381-9490-337000f6aa00-kube-api-access-4dncw\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378506 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-config\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378529 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32bee613-dd08-4612-936c-dd68b630651e-audit-dir\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378548 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-config\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378567 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/63cbab1e-f06a-4692-836f-3cdbb9260104-etcd-ca\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378584 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08186c65-b069-4756-af19-5255a7a5fe2f-auth-proxy-config\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378614 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35ea2f50-9645-4c72-85be-367a40e4a19e-serving-cert\") pod \"service-ca-operator-777779d784-slx45\" (UID: \"35ea2f50-9645-4c72-85be-367a40e4a19e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378638 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57019cb4-962f-4e52-889d-d11bac56fa88-serving-cert\") pod \"console-operator-58897d9998-nm4mt\" (UID: \"57019cb4-962f-4e52-889d-d11bac56fa88\") " pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378665 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: 
\"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378685 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49544\" (UniqueName: \"kubernetes.io/projected/63cbab1e-f06a-4692-836f-3cdbb9260104-kube-api-access-49544\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378707 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378725 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b88e1a68-3348-4ac7-b0b8-ba2215da118f-default-certificate\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378748 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5933589-42d6-47af-b723-2af986d94c98-proxy-tls\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378766 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5933589-42d6-47af-b723-2af986d94c98-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378789 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a3fc408-742d-46bb-93cd-05343faababf-node-pullsecrets\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378809 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a3fc408-742d-46bb-93cd-05343faababf-etcd-client\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378827 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx254\" (UniqueName: \"kubernetes.io/projected/b5933589-42d6-47af-b723-2af986d94c98-kube-api-access-rx254\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378847 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde-serving-cert\") pod \"openshift-config-operator-7777fb866f-gwbfn\" (UID: \"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378874 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p846q\" (UniqueName: \"kubernetes.io/projected/f9fa07de-d775-4c9b-af3e-03b39e6c33b6-kube-api-access-p846q\") pod \"kube-storage-version-migrator-operator-b67b599dd-t9nqj\" (UID: \"f9fa07de-d775-4c9b-af3e-03b39e6c33b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.378895 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379135 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82c01e58-4984-4ac3-951d-0f96fff19f57-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gnn4g\" (UID: \"82c01e58-4984-4ac3-951d-0f96fff19f57\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379162 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88e1a68-3348-4ac7-b0b8-ba2215da118f-metrics-certs\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379182 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-service-ca\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379202 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c01e58-4984-4ac3-951d-0f96fff19f57-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gnn4g\" (UID: \"82c01e58-4984-4ac3-951d-0f96fff19f57\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379240 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/32bee613-dd08-4612-936c-dd68b630651e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379258 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a3fc408-742d-46bb-93cd-05343faababf-serving-cert\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379282 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379304 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63cbab1e-f06a-4692-836f-3cdbb9260104-config\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379322 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2f2g\" (UniqueName: \"kubernetes.io/projected/84721999-239a-421e-a892-de0042ff1937-kube-api-access-w2f2g\") pod \"migrator-59844c95c7-dcw54\" (UID: \"84721999-239a-421e-a892-de0042ff1937\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379340 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c337e9fe-a7db-4b56-92c4-82905fb59d53-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379358 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70c3c027-6018-4182-bf8c-6309230608eb-config-volume\") pod \"collect-profiles-29483820-nknlp\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379380 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d62e94e1-ec68-4f36-9de7-005b8ed5a0ac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nzkbt\" (UID: \"d62e94e1-ec68-4f36-9de7-005b8ed5a0ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379396 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/32bee613-dd08-4612-936c-dd68b630651e-etcd-client\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379415 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecb5870e-f9cf-4b70-ac31-4d62d2902bf8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7vdnh\" (UID: \"ecb5870e-f9cf-4b70-ac31-4d62d2902bf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379436 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx42c\" (UniqueName: \"kubernetes.io/projected/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-kube-api-access-wx42c\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379455 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32bee613-dd08-4612-936c-dd68b630651e-audit-policies\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc 
kubenswrapper[4860]: I0121 21:11:06.379472 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/402083eb-5844-4f8c-8dfa-067947a1bc48-profile-collector-cert\") pod \"catalog-operator-68c6474976-stn5k\" (UID: \"402083eb-5844-4f8c-8dfa-067947a1bc48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379492 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftn9q\" (UniqueName: \"kubernetes.io/projected/32bee613-dd08-4612-936c-dd68b630651e-kube-api-access-ftn9q\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379511 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkfnw\" (UniqueName: \"kubernetes.io/projected/fb13868e-5322-4a98-b168-40a0a6bd8459-kube-api-access-qkfnw\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379531 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdkhd\" (UniqueName: \"kubernetes.io/projected/57019cb4-962f-4e52-889d-d11bac56fa88-kube-api-access-sdkhd\") pod \"console-operator-58897d9998-nm4mt\" (UID: \"57019cb4-962f-4e52-889d-d11bac56fa88\") " pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379548 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsfcf\" (UniqueName: \"kubernetes.io/projected/82c01e58-4984-4ac3-951d-0f96fff19f57-kube-api-access-qsfcf\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-gnn4g\" (UID: \"82c01e58-4984-4ac3-951d-0f96fff19f57\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379566 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c337e9fe-a7db-4b56-92c4-82905fb59d53-metrics-tls\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379587 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61cb972e-5da1-4381-9490-337000f6aa00-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379605 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379622 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb6j9\" (UniqueName: \"kubernetes.io/projected/70aea1b0-13b2-43ee-a77d-10c3143e4a95-kube-api-access-cb6j9\") pod \"control-plane-machine-set-operator-78cbb6b69f-4x452\" (UID: \"70aea1b0-13b2-43ee-a77d-10c3143e4a95\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379643 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj5z2\" (UniqueName: \"kubernetes.io/projected/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-kube-api-access-mj5z2\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379664 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d81c2475-b36c-44d5-a7da-1bec8c5871b0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ncbcn\" (UID: \"d81c2475-b36c-44d5-a7da-1bec8c5871b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379682 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/63cbab1e-f06a-4692-836f-3cdbb9260104-etcd-client\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379700 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08186c65-b069-4756-af19-5255a7a5fe2f-machine-approver-tls\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379718 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-client-ca\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379737 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b88e1a68-3348-4ac7-b0b8-ba2215da118f-stats-auth\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379755 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70c3c027-6018-4182-bf8c-6309230608eb-secret-volume\") pod \"collect-profiles-29483820-nknlp\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379773 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/402083eb-5844-4f8c-8dfa-067947a1bc48-srv-cert\") pod \"catalog-operator-68c6474976-stn5k\" (UID: \"402083eb-5844-4f8c-8dfa-067947a1bc48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379795 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57019cb4-962f-4e52-889d-d11bac56fa88-config\") pod \"console-operator-58897d9998-nm4mt\" (UID: \"57019cb4-962f-4e52-889d-d11bac56fa88\") " pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 
21:11:06.379815 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/19b5214f-7427-49e9-a40e-2c295e1600d4-proxy-tls\") pod \"machine-config-controller-84d6567774-c4t7l\" (UID: \"19b5214f-7427-49e9-a40e-2c295e1600d4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379837 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-oauth-config\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379861 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-etcd-serving-ca\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379882 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-image-import-ca\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379900 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63cbab1e-f06a-4692-836f-3cdbb9260104-serving-cert\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379919 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bee613-dd08-4612-936c-dd68b630651e-serving-cert\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379957 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.379981 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380040 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-client-ca\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380063 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-oauth-serving-cert\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380086 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19b5214f-7427-49e9-a40e-2c295e1600d4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c4t7l\" (UID: \"19b5214f-7427-49e9-a40e-2c295e1600d4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380108 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-config\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380567 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380661 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-service-ca-bundle\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc 
kubenswrapper[4860]: I0121 21:11:06.380690 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb13868e-5322-4a98-b168-40a0a6bd8459-serving-cert\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380713 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380852 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61cb972e-5da1-4381-9490-337000f6aa00-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380890 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lqpk\" (UniqueName: \"kubernetes.io/projected/d1fafd15-88be-43d0-b7f0-750b4c592352-kube-api-access-8lqpk\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380926 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380960 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-audit\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380977 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a3fc408-742d-46bb-93cd-05343faababf-audit-dir\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.380995 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c337e9fe-a7db-4b56-92c4-82905fb59d53-trusted-ca\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381016 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-serving-cert\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381057 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/2e29e04b-89f7-4d77-8e17-0355493a1d9f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lcbjc\" (UID: \"2e29e04b-89f7-4d77-8e17-0355493a1d9f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381075 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/265b2226-a08f-4ba0-b20a-25e422c21c37-metrics-tls\") pod \"dns-operator-744455d44c-cgwn6\" (UID: \"265b2226-a08f-4ba0-b20a-25e422c21c37\") " pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381102 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gwbfn\" (UID: \"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381124 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b5933589-42d6-47af-b723-2af986d94c98-images\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381172 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdt7\" (UniqueName: \"kubernetes.io/projected/265b2226-a08f-4ba0-b20a-25e422c21c37-kube-api-access-hqdt7\") pod \"dns-operator-744455d44c-cgwn6\" (UID: \"265b2226-a08f-4ba0-b20a-25e422c21c37\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381194 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfmlf\" (UniqueName: \"kubernetes.io/projected/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-kube-api-access-wfmlf\") pod \"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381213 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fa07de-d775-4c9b-af3e-03b39e6c33b6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t9nqj\" (UID: \"f9fa07de-d775-4c9b-af3e-03b39e6c33b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381236 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a3fc408-742d-46bb-93cd-05343faababf-encryption-config\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381255 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/63cbab1e-f06a-4692-836f-3cdbb9260104-etcd-service-ca\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381272 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kt5j8\" (UniqueName: \"kubernetes.io/projected/b88e1a68-3348-4ac7-b0b8-ba2215da118f-kube-api-access-kt5j8\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381290 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwhhg\" (UniqueName: \"kubernetes.io/projected/3a3fc408-742d-46bb-93cd-05343faababf-kube-api-access-gwhhg\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381302 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381310 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4crk\" (UniqueName: \"kubernetes.io/projected/8445d936-5e91-4817-afda-a75203024c29-kube-api-access-z4crk\") pod \"downloads-7954f5f757-hv4bj\" (UID: \"8445d936-5e91-4817-afda-a75203024c29\") " pod="openshift-console/downloads-7954f5f757-hv4bj" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381404 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlwq7\" (UniqueName: \"kubernetes.io/projected/2e29e04b-89f7-4d77-8e17-0355493a1d9f-kube-api-access-zlwq7\") pod \"multus-admission-controller-857f4d67dd-lcbjc\" (UID: \"2e29e04b-89f7-4d77-8e17-0355493a1d9f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381428 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vpgqx\" (UniqueName: \"kubernetes.io/projected/ecb5870e-f9cf-4b70-ac31-4d62d2902bf8-kube-api-access-vpgqx\") pod \"cluster-samples-operator-665b6dd947-7vdnh\" (UID: \"ecb5870e-f9cf-4b70-ac31-4d62d2902bf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381460 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-config\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381490 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-dir\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381518 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c1cfe23-822a-462f-9db6-b4d87eae0d58-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6scsc\" (UID: \"5c1cfe23-822a-462f-9db6-b4d87eae0d58\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381544 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1cfe23-822a-462f-9db6-b4d87eae0d58-config\") pod \"kube-controller-manager-operator-78b949d7b-6scsc\" (UID: \"5c1cfe23-822a-462f-9db6-b4d87eae0d58\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.381838 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-dir\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.382876 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-oauth-serving-cert\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.382886 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-config\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.383056 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-policies\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.383855 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57019cb4-962f-4e52-889d-d11bac56fa88-config\") pod \"console-operator-58897d9998-nm4mt\" (UID: \"57019cb4-962f-4e52-889d-d11bac56fa88\") " 
pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.384297 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.384511 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.385082 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.385309 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-client-ca\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.385640 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cgwn6"] Jan 21 21:11:06 crc 
kubenswrapper[4860]: I0121 21:11:06.386646 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61cb972e-5da1-4381-9490-337000f6aa00-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.386872 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-serving-cert\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.387120 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57019cb4-962f-4e52-889d-d11bac56fa88-trusted-ca\") pod \"console-operator-58897d9998-nm4mt\" (UID: \"57019cb4-962f-4e52-889d-d11bac56fa88\") " pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.387251 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lcbjc"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.388609 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-client-ca\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.389324 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.389420 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32bee613-dd08-4612-936c-dd68b630651e-audit-dir\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.390861 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c01e58-4984-4ac3-951d-0f96fff19f57-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gnn4g\" (UID: \"82c01e58-4984-4ac3-951d-0f96fff19f57\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.390861 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-service-ca-bundle\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.391040 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hv4bj"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.391163 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.392464 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d62e94e1-ec68-4f36-9de7-005b8ed5a0ac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nzkbt\" (UID: 
\"d62e94e1-ec68-4f36-9de7-005b8ed5a0ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.391771 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-oauth-config\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.391866 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-config\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.392143 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.393044 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-config\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.393196 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-config\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.393288 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/32bee613-dd08-4612-936c-dd68b630651e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.393500 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.391617 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32bee613-dd08-4612-936c-dd68b630651e-audit-policies\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.393792 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.394078 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.394564 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.395103 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-service-ca\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.395263 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.395380 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-slx45"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.395459 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k7nfg"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.395581 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-serving-cert\") pod \"authentication-operator-69f744f599-nd5p4\" 
(UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.395614 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.396948 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d62e94e1-ec68-4f36-9de7-005b8ed5a0ac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nzkbt\" (UID: \"d62e94e1-ec68-4f36-9de7-005b8ed5a0ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.398794 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nsjpv"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.399676 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/32bee613-dd08-4612-936c-dd68b630651e-etcd-client\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.399843 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb13868e-5322-4a98-b168-40a0a6bd8459-serving-cert\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:06 crc 
kubenswrapper[4860]: I0121 21:11:06.401256 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/32bee613-dd08-4612-936c-dd68b630651e-encryption-config\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.401943 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.403324 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/61cb972e-5da1-4381-9490-337000f6aa00-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.403411 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q9n6j"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.403870 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82c01e58-4984-4ac3-951d-0f96fff19f57-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gnn4g\" (UID: \"82c01e58-4984-4ac3-951d-0f96fff19f57\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.405296 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.405434 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.406425 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.406591 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bee613-dd08-4612-936c-dd68b630651e-serving-cert\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.406923 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.407618 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rkt4n"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.408692 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.408960 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57019cb4-962f-4e52-889d-d11bac56fa88-serving-cert\") pod \"console-operator-58897d9998-nm4mt\" (UID: \"57019cb4-962f-4e52-889d-d11bac56fa88\") " pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.409657 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-d6fh7"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.409824 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.410352 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-d6fh7" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.410485 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.413177 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.415682 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.419707 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4dq5s"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.420735 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-rkt4n"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.433997 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-s24bn"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.435036 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.436422 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-s24bn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.436926 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-s24bn"] Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.450369 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.470504 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482592 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a3fc408-742d-46bb-93cd-05343faababf-encryption-config\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482628 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/63cbab1e-f06a-4692-836f-3cdbb9260104-etcd-service-ca\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 
21:11:06.482650 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5j8\" (UniqueName: \"kubernetes.io/projected/b88e1a68-3348-4ac7-b0b8-ba2215da118f-kube-api-access-kt5j8\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482676 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwhhg\" (UniqueName: \"kubernetes.io/projected/3a3fc408-742d-46bb-93cd-05343faababf-kube-api-access-gwhhg\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482693 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlwq7\" (UniqueName: \"kubernetes.io/projected/2e29e04b-89f7-4d77-8e17-0355493a1d9f-kube-api-access-zlwq7\") pod \"multus-admission-controller-857f4d67dd-lcbjc\" (UID: \"2e29e04b-89f7-4d77-8e17-0355493a1d9f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482712 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpgqx\" (UniqueName: \"kubernetes.io/projected/ecb5870e-f9cf-4b70-ac31-4d62d2902bf8-kube-api-access-vpgqx\") pod \"cluster-samples-operator-665b6dd947-7vdnh\" (UID: \"ecb5870e-f9cf-4b70-ac31-4d62d2902bf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482730 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4crk\" (UniqueName: \"kubernetes.io/projected/8445d936-5e91-4817-afda-a75203024c29-kube-api-access-z4crk\") pod \"downloads-7954f5f757-hv4bj\" (UID: 
\"8445d936-5e91-4817-afda-a75203024c29\") " pod="openshift-console/downloads-7954f5f757-hv4bj" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482786 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c1cfe23-822a-462f-9db6-b4d87eae0d58-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6scsc\" (UID: \"5c1cfe23-822a-462f-9db6-b4d87eae0d58\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482807 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1cfe23-822a-462f-9db6-b4d87eae0d58-config\") pod \"kube-controller-manager-operator-78b949d7b-6scsc\" (UID: \"5c1cfe23-822a-462f-9db6-b4d87eae0d58\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482828 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkswk\" (UniqueName: \"kubernetes.io/projected/35ea2f50-9645-4c72-85be-367a40e4a19e-kube-api-access-lkswk\") pod \"service-ca-operator-777779d784-slx45\" (UID: \"35ea2f50-9645-4c72-85be-367a40e4a19e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482850 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8cdm\" (UniqueName: \"kubernetes.io/projected/d81c2475-b36c-44d5-a7da-1bec8c5871b0-kube-api-access-w8cdm\") pod \"package-server-manager-789f6589d5-ncbcn\" (UID: \"d81c2475-b36c-44d5-a7da-1bec8c5871b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482897 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zp6x9\" (UniqueName: \"kubernetes.io/projected/70c3c027-6018-4182-bf8c-6309230608eb-kube-api-access-zp6x9\") pod \"collect-profiles-29483820-nknlp\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482917 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b88e1a68-3348-4ac7-b0b8-ba2215da118f-service-ca-bundle\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482963 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7kn7\" (UniqueName: \"kubernetes.io/projected/402083eb-5844-4f8c-8dfa-067947a1bc48-kube-api-access-t7kn7\") pod \"catalog-operator-68c6474976-stn5k\" (UID: \"402083eb-5844-4f8c-8dfa-067947a1bc48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.482991 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnvtx\" (UniqueName: \"kubernetes.io/projected/f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde-kube-api-access-rnvtx\") pod \"openshift-config-operator-7777fb866f-gwbfn\" (UID: \"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483010 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhgtw\" (UniqueName: \"kubernetes.io/projected/08186c65-b069-4756-af19-5255a7a5fe2f-kube-api-access-rhgtw\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") 
" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483030 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c1cfe23-822a-462f-9db6-b4d87eae0d58-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6scsc\" (UID: \"5c1cfe23-822a-462f-9db6-b4d87eae0d58\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483050 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483075 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fa07de-d775-4c9b-af3e-03b39e6c33b6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t9nqj\" (UID: \"f9fa07de-d775-4c9b-af3e-03b39e6c33b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483097 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-config\") pod \"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483130 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/70aea1b0-13b2-43ee-a77d-10c3143e4a95-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4x452\" (UID: \"70aea1b0-13b2-43ee-a77d-10c3143e4a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483153 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08186c65-b069-4756-af19-5255a7a5fe2f-config\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483173 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-images\") pod \"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483196 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35ea2f50-9645-4c72-85be-367a40e4a19e-config\") pod \"service-ca-operator-777779d784-slx45\" (UID: \"35ea2f50-9645-4c72-85be-367a40e4a19e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483245 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwkzs\" (UniqueName: \"kubernetes.io/projected/19b5214f-7427-49e9-a40e-2c295e1600d4-kube-api-access-nwkzs\") pod \"machine-config-controller-84d6567774-c4t7l\" (UID: \"19b5214f-7427-49e9-a40e-2c295e1600d4\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483267 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmx7c\" (UniqueName: \"kubernetes.io/projected/c337e9fe-a7db-4b56-92c4-82905fb59d53-kube-api-access-gmx7c\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483296 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-config\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483323 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/63cbab1e-f06a-4692-836f-3cdbb9260104-etcd-ca\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483340 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08186c65-b069-4756-af19-5255a7a5fe2f-auth-proxy-config\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483361 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49544\" (UniqueName: \"kubernetes.io/projected/63cbab1e-f06a-4692-836f-3cdbb9260104-kube-api-access-49544\") pod 
\"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483379 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35ea2f50-9645-4c72-85be-367a40e4a19e-serving-cert\") pod \"service-ca-operator-777779d784-slx45\" (UID: \"35ea2f50-9645-4c72-85be-367a40e4a19e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483436 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b88e1a68-3348-4ac7-b0b8-ba2215da118f-default-certificate\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483457 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a3fc408-742d-46bb-93cd-05343faababf-node-pullsecrets\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483480 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5933589-42d6-47af-b723-2af986d94c98-proxy-tls\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483497 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/b5933589-42d6-47af-b723-2af986d94c98-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483518 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a3fc408-742d-46bb-93cd-05343faababf-etcd-client\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483517 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/63cbab1e-f06a-4692-836f-3cdbb9260104-etcd-service-ca\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483536 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx254\" (UniqueName: \"kubernetes.io/projected/b5933589-42d6-47af-b723-2af986d94c98-kube-api-access-rx254\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483556 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde-serving-cert\") pod \"openshift-config-operator-7777fb866f-gwbfn\" (UID: \"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483577 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p846q\" (UniqueName: \"kubernetes.io/projected/f9fa07de-d775-4c9b-af3e-03b39e6c33b6-kube-api-access-p846q\") pod \"kube-storage-version-migrator-operator-b67b599dd-t9nqj\" (UID: \"f9fa07de-d775-4c9b-af3e-03b39e6c33b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483595 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88e1a68-3348-4ac7-b0b8-ba2215da118f-metrics-certs\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483615 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a3fc408-742d-46bb-93cd-05343faababf-serving-cert\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483646 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63cbab1e-f06a-4692-836f-3cdbb9260104-config\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483665 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2f2g\" (UniqueName: \"kubernetes.io/projected/84721999-239a-421e-a892-de0042ff1937-kube-api-access-w2f2g\") pod \"migrator-59844c95c7-dcw54\" (UID: \"84721999-239a-421e-a892-de0042ff1937\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483687 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c337e9fe-a7db-4b56-92c4-82905fb59d53-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483706 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70c3c027-6018-4182-bf8c-6309230608eb-config-volume\") pod \"collect-profiles-29483820-nknlp\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483724 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecb5870e-f9cf-4b70-ac31-4d62d2902bf8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7vdnh\" (UID: \"ecb5870e-f9cf-4b70-ac31-4d62d2902bf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483744 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/402083eb-5844-4f8c-8dfa-067947a1bc48-profile-collector-cert\") pod \"catalog-operator-68c6474976-stn5k\" (UID: \"402083eb-5844-4f8c-8dfa-067947a1bc48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483780 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c337e9fe-a7db-4b56-92c4-82905fb59d53-metrics-tls\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483811 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb6j9\" (UniqueName: \"kubernetes.io/projected/70aea1b0-13b2-43ee-a77d-10c3143e4a95-kube-api-access-cb6j9\") pod \"control-plane-machine-set-operator-78cbb6b69f-4x452\" (UID: \"70aea1b0-13b2-43ee-a77d-10c3143e4a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483836 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/63cbab1e-f06a-4692-836f-3cdbb9260104-etcd-client\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483855 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08186c65-b069-4756-af19-5255a7a5fe2f-machine-approver-tls\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483882 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d81c2475-b36c-44d5-a7da-1bec8c5871b0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ncbcn\" (UID: \"d81c2475-b36c-44d5-a7da-1bec8c5871b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483901 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/402083eb-5844-4f8c-8dfa-067947a1bc48-srv-cert\") pod \"catalog-operator-68c6474976-stn5k\" (UID: \"402083eb-5844-4f8c-8dfa-067947a1bc48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483917 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b88e1a68-3348-4ac7-b0b8-ba2215da118f-stats-auth\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483950 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70c3c027-6018-4182-bf8c-6309230608eb-secret-volume\") pod \"collect-profiles-29483820-nknlp\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483973 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/19b5214f-7427-49e9-a40e-2c295e1600d4-proxy-tls\") pod \"machine-config-controller-84d6567774-c4t7l\" (UID: \"19b5214f-7427-49e9-a40e-2c295e1600d4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.483994 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-etcd-serving-ca\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484016 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-image-import-ca\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484034 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63cbab1e-f06a-4692-836f-3cdbb9260104-serving-cert\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484056 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19b5214f-7427-49e9-a40e-2c295e1600d4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c4t7l\" (UID: \"19b5214f-7427-49e9-a40e-2c295e1600d4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484085 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484124 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c337e9fe-a7db-4b56-92c4-82905fb59d53-trusted-ca\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484145 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-audit\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484166 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a3fc408-742d-46bb-93cd-05343faababf-audit-dir\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484215 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2e29e04b-89f7-4d77-8e17-0355493a1d9f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lcbjc\" (UID: \"2e29e04b-89f7-4d77-8e17-0355493a1d9f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484239 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gwbfn\" (UID: \"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484261 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/265b2226-a08f-4ba0-b20a-25e422c21c37-metrics-tls\") pod \"dns-operator-744455d44c-cgwn6\" (UID: \"265b2226-a08f-4ba0-b20a-25e422c21c37\") " pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484285 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b5933589-42d6-47af-b723-2af986d94c98-images\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484303 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfmlf\" (UniqueName: \"kubernetes.io/projected/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-kube-api-access-wfmlf\") pod \"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484322 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqdt7\" (UniqueName: \"kubernetes.io/projected/265b2226-a08f-4ba0-b20a-25e422c21c37-kube-api-access-hqdt7\") pod \"dns-operator-744455d44c-cgwn6\" (UID: \"265b2226-a08f-4ba0-b20a-25e422c21c37\") " pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484344 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fa07de-d775-4c9b-af3e-03b39e6c33b6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t9nqj\" (UID: \"f9fa07de-d775-4c9b-af3e-03b39e6c33b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.484406 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-images\") pod \"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.485059 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08186c65-b069-4756-af19-5255a7a5fe2f-config\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.485109 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-config\") pod \"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.485701 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.485729 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-config\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.485819 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a3fc408-742d-46bb-93cd-05343faababf-node-pullsecrets\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.485980 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63cbab1e-f06a-4692-836f-3cdbb9260104-config\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.486489 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/63cbab1e-f06a-4692-836f-3cdbb9260104-etcd-ca\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.487019 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5933589-42d6-47af-b723-2af986d94c98-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.487072 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19b5214f-7427-49e9-a40e-2c295e1600d4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c4t7l\" (UID: \"19b5214f-7427-49e9-a40e-2c295e1600d4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.487131 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08186c65-b069-4756-af19-5255a7a5fe2f-auth-proxy-config\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.487216 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde-serving-cert\") pod \"openshift-config-operator-7777fb866f-gwbfn\" (UID: \"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.487446 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a3fc408-742d-46bb-93cd-05343faababf-audit-dir\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.487505 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gwbfn\" (UID: \"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.487889 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-etcd-serving-ca\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.488262 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-audit\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.488328 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a3fc408-742d-46bb-93cd-05343faababf-encryption-config\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.488400 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a3fc408-742d-46bb-93cd-05343faababf-image-import-ca\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.489062 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a3fc408-742d-46bb-93cd-05343faababf-serving-cert\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.490004 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.491604 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08186c65-b069-4756-af19-5255a7a5fe2f-machine-approver-tls\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.492106 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a3fc408-742d-46bb-93cd-05343faababf-etcd-client\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.493647 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/63cbab1e-f06a-4692-836f-3cdbb9260104-etcd-client\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.494085 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.494739 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63cbab1e-f06a-4692-836f-3cdbb9260104-serving-cert\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.495266 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecb5870e-f9cf-4b70-ac31-4d62d2902bf8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7vdnh\" (UID: \"ecb5870e-f9cf-4b70-ac31-4d62d2902bf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.497008 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c1cfe23-822a-462f-9db6-b4d87eae0d58-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6scsc\" (UID: \"5c1cfe23-822a-462f-9db6-b4d87eae0d58\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.511239 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.514834 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1cfe23-822a-462f-9db6-b4d87eae0d58-config\") pod \"kube-controller-manager-operator-78b949d7b-6scsc\" (UID: \"5c1cfe23-822a-462f-9db6-b4d87eae0d58\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.529789 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.549536 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.560823 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/265b2226-a08f-4ba0-b20a-25e422c21c37-metrics-tls\") pod \"dns-operator-744455d44c-cgwn6\" (UID: \"265b2226-a08f-4ba0-b20a-25e422c21c37\") " pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.569086 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.590220 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.609550 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.629825 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.649218 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.658077 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c337e9fe-a7db-4b56-92c4-82905fb59d53-metrics-tls\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.677086 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.679729 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c337e9fe-a7db-4b56-92c4-82905fb59d53-trusted-ca\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.689740 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.709025 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.730172 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.737383 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88e1a68-3348-4ac7-b0b8-ba2215da118f-metrics-certs\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.749126 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.768814 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.780430 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b88e1a68-3348-4ac7-b0b8-ba2215da118f-default-certificate\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.789305 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.801819 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b88e1a68-3348-4ac7-b0b8-ba2215da118f-stats-auth\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.809317 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.816052 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b88e1a68-3348-4ac7-b0b8-ba2215da118f-service-ca-bundle\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.830288 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.849442 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.868467 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.889376 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.909523 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.929432 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.941672 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/70aea1b0-13b2-43ee-a77d-10c3143e4a95-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4x452\" (UID: \"70aea1b0-13b2-43ee-a77d-10c3143e4a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.949910 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.974220 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 21 21:11:06 crc kubenswrapper[4860]: I0121 21:11:06.991498 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.009641 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.022677 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d81c2475-b36c-44d5-a7da-1bec8c5871b0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ncbcn\" (UID: \"d81c2475-b36c-44d5-a7da-1bec8c5871b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.030578 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.042994 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/402083eb-5844-4f8c-8dfa-067947a1bc48-srv-cert\") pod \"catalog-operator-68c6474976-stn5k\" (UID: \"402083eb-5844-4f8c-8dfa-067947a1bc48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.050088 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.057407 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/402083eb-5844-4f8c-8dfa-067947a1bc48-profile-collector-cert\") pod \"catalog-operator-68c6474976-stn5k\" (UID: \"402083eb-5844-4f8c-8dfa-067947a1bc48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.059807 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70c3c027-6018-4182-bf8c-6309230608eb-secret-volume\") pod \"collect-profiles-29483820-nknlp\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.069865 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.090108 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.109252 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.122078 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35ea2f50-9645-4c72-85be-367a40e4a19e-serving-cert\") pod \"service-ca-operator-777779d784-slx45\" (UID: \"35ea2f50-9645-4c72-85be-367a40e4a19e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.130035 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.136790 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35ea2f50-9645-4c72-85be-367a40e4a19e-config\") pod \"service-ca-operator-777779d784-slx45\" (UID: \"35ea2f50-9645-4c72-85be-367a40e4a19e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.149955 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.169829 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.177757 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70c3c027-6018-4182-bf8c-6309230608eb-config-volume\") pod \"collect-profiles-29483820-nknlp\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.189569 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.209586 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.230005 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.249738 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.258756 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fa07de-d775-4c9b-af3e-03b39e6c33b6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t9nqj\" (UID: \"f9fa07de-d775-4c9b-af3e-03b39e6c33b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.270028 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.276384 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fa07de-d775-4c9b-af3e-03b39e6c33b6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t9nqj\" (UID: \"f9fa07de-d775-4c9b-af3e-03b39e6c33b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.290106 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.306972 4860 request.go:700] Waited for 1.010429958s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.309150 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.321576 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/19b5214f-7427-49e9-a40e-2c295e1600d4-proxy-tls\") pod \"machine-config-controller-84d6567774-c4t7l\" (UID: \"19b5214f-7427-49e9-a40e-2c295e1600d4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.329367 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.349478 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.362775 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2e29e04b-89f7-4d77-8e17-0355493a1d9f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lcbjc\" (UID: \"2e29e04b-89f7-4d77-8e17-0355493a1d9f\")
" pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.370159 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.409588 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.419475 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b5933589-42d6-47af-b723-2af986d94c98-images\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.428847 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.439592 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5933589-42d6-47af-b723-2af986d94c98-proxy-tls\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.449595 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.468281 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.489231 4860 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.516995 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.528478 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.548854 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.568966 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.588632 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.609046 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.629600 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.648559 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.668877 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.688986 4860 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.709234 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.728299 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.749307 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.771396 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.790342 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.810142 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.829697 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.849705 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.869487 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.890141 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.910244 4860 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.929685 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.950369 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.970294 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 21:11:07 crc kubenswrapper[4860]: I0121 21:11:07.988712 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.025248 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khpbg\" (UniqueName: \"kubernetes.io/projected/ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6-kube-api-access-khpbg\") pod \"authentication-operator-69f744f599-nd5p4\" (UID: \"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.058484 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn95x\" (UniqueName: \"kubernetes.io/projected/d62e94e1-ec68-4f36-9de7-005b8ed5a0ac-kube-api-access-xn95x\") pod \"openshift-apiserver-operator-796bbdcf4f-nzkbt\" (UID: \"d62e94e1-ec68-4f36-9de7-005b8ed5a0ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.080050 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdkhd\" (UniqueName: 
\"kubernetes.io/projected/57019cb4-962f-4e52-889d-d11bac56fa88-kube-api-access-sdkhd\") pod \"console-operator-58897d9998-nm4mt\" (UID: \"57019cb4-962f-4e52-889d-d11bac56fa88\") " pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.090176 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsfcf\" (UniqueName: \"kubernetes.io/projected/82c01e58-4984-4ac3-951d-0f96fff19f57-kube-api-access-qsfcf\") pod \"openshift-controller-manager-operator-756b6f6bc6-gnn4g\" (UID: \"82c01e58-4984-4ac3-951d-0f96fff19f57\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.102042 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.121255 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61cb972e-5da1-4381-9490-337000f6aa00-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.121600 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.127560 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj5z2\" (UniqueName: \"kubernetes.io/projected/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-kube-api-access-mj5z2\") pod \"console-f9d7485db-hbh47\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.149067 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lqpk\" (UniqueName: \"kubernetes.io/projected/d1fafd15-88be-43d0-b7f0-750b4c592352-kube-api-access-8lqpk\") pod \"oauth-openshift-558db77b4-fvk47\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") " pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.167079 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dncw\" (UniqueName: \"kubernetes.io/projected/61cb972e-5da1-4381-9490-337000f6aa00-kube-api-access-4dncw\") pod \"cluster-image-registry-operator-dc59b4c8b-r8wbl\" (UID: \"61cb972e-5da1-4381-9490-337000f6aa00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.185751 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftn9q\" (UniqueName: \"kubernetes.io/projected/32bee613-dd08-4612-936c-dd68b630651e-kube-api-access-ftn9q\") pod \"apiserver-7bbb656c7d-pr2fp\" (UID: \"32bee613-dd08-4612-936c-dd68b630651e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.203271 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkfnw\" (UniqueName: 
\"kubernetes.io/projected/fb13868e-5322-4a98-b168-40a0a6bd8459-kube-api-access-qkfnw\") pod \"controller-manager-879f6c89f-xxb4c\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.224753 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx42c\" (UniqueName: \"kubernetes.io/projected/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-kube-api-access-wx42c\") pod \"route-controller-manager-6576b87f9c-dzzs7\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.249319 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.249401 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.257430 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.301098 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.323961 4860 request.go:700] Waited for 1.913166427s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.335527 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.336107 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.336203 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.340063 4860 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.344848 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.351623 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.354633 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.368126 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.378131 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.387112 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.390403 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.409875 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.414313 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.446960 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nm4mt"] Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.458947 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5j8\" (UniqueName: \"kubernetes.io/projected/b88e1a68-3348-4ac7-b0b8-ba2215da118f-kube-api-access-kt5j8\") pod \"router-default-5444994796-v4hsh\" (UID: \"b88e1a68-3348-4ac7-b0b8-ba2215da118f\") " pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.465051 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8cdm\" (UniqueName: \"kubernetes.io/projected/d81c2475-b36c-44d5-a7da-1bec8c5871b0-kube-api-access-w8cdm\") pod \"package-server-manager-789f6589d5-ncbcn\" (UID: \"d81c2475-b36c-44d5-a7da-1bec8c5871b0\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.505808 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt"] Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.515425 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4crk\" (UniqueName: \"kubernetes.io/projected/8445d936-5e91-4817-afda-a75203024c29-kube-api-access-z4crk\") pod \"downloads-7954f5f757-hv4bj\" (UID: \"8445d936-5e91-4817-afda-a75203024c29\") " pod="openshift-console/downloads-7954f5f757-hv4bj" Jan 21 21:11:08 crc kubenswrapper[4860]: W0121 21:11:08.516781 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57019cb4_962f_4e52_889d_d11bac56fa88.slice/crio-ed0f585cf68ffa3e3a283ed0f83213251b6132ae9585d5f2d9c093461e9b5d8d WatchSource:0}: Error finding container ed0f585cf68ffa3e3a283ed0f83213251b6132ae9585d5f2d9c093461e9b5d8d: Status 404 returned error can't find the container with id ed0f585cf68ffa3e3a283ed0f83213251b6132ae9585d5f2d9c093461e9b5d8d Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.523537 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwhhg\" (UniqueName: \"kubernetes.io/projected/3a3fc408-742d-46bb-93cd-05343faababf-kube-api-access-gwhhg\") pod \"apiserver-76f77b778f-q9n6j\" (UID: \"3a3fc408-742d-46bb-93cd-05343faababf\") " pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.533696 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlwq7\" (UniqueName: \"kubernetes.io/projected/2e29e04b-89f7-4d77-8e17-0355493a1d9f-kube-api-access-zlwq7\") pod \"multus-admission-controller-857f4d67dd-lcbjc\" (UID: \"2e29e04b-89f7-4d77-8e17-0355493a1d9f\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.544952 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.549219 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpgqx\" (UniqueName: \"kubernetes.io/projected/ecb5870e-f9cf-4b70-ac31-4d62d2902bf8-kube-api-access-vpgqx\") pod \"cluster-samples-operator-665b6dd947-7vdnh\" (UID: \"ecb5870e-f9cf-4b70-ac31-4d62d2902bf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.609987 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hv4bj" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.615395 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkswk\" (UniqueName: \"kubernetes.io/projected/35ea2f50-9645-4c72-85be-367a40e4a19e-kube-api-access-lkswk\") pod \"service-ca-operator-777779d784-slx45\" (UID: \"35ea2f50-9645-4c72-85be-367a40e4a19e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.623359 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.642093 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c1cfe23-822a-462f-9db6-b4d87eae0d58-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6scsc\" (UID: \"5c1cfe23-822a-462f-9db6-b4d87eae0d58\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.645514 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.646885 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7kn7\" (UniqueName: \"kubernetes.io/projected/402083eb-5844-4f8c-8dfa-067947a1bc48-kube-api-access-t7kn7\") pod \"catalog-operator-68c6474976-stn5k\" (UID: \"402083eb-5844-4f8c-8dfa-067947a1bc48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.651769 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.654979 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnvtx\" (UniqueName: \"kubernetes.io/projected/f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde-kube-api-access-rnvtx\") pod \"openshift-config-operator-7777fb866f-gwbfn\" (UID: \"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.660111 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.680732 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhgtw\" (UniqueName: \"kubernetes.io/projected/08186c65-b069-4756-af19-5255a7a5fe2f-kube-api-access-rhgtw\") pod \"machine-approver-56656f9798-g8tw8\" (UID: \"08186c65-b069-4756-af19-5255a7a5fe2f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.694048 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.694990 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp6x9\" (UniqueName: \"kubernetes.io/projected/70c3c027-6018-4182-bf8c-6309230608eb-kube-api-access-zp6x9\") pod \"collect-profiles-29483820-nknlp\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.733997 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.757739 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p846q\" (UniqueName: \"kubernetes.io/projected/f9fa07de-d775-4c9b-af3e-03b39e6c33b6-kube-api-access-p846q\") pod \"kube-storage-version-migrator-operator-b67b599dd-t9nqj\" (UID: \"f9fa07de-d775-4c9b-af3e-03b39e6c33b6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.760744 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwkzs\" (UniqueName: \"kubernetes.io/projected/19b5214f-7427-49e9-a40e-2c295e1600d4-kube-api-access-nwkzs\") pod \"machine-config-controller-84d6567774-c4t7l\" (UID: \"19b5214f-7427-49e9-a40e-2c295e1600d4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.766779 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.771303 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmx7c\" (UniqueName: \"kubernetes.io/projected/c337e9fe-a7db-4b56-92c4-82905fb59d53-kube-api-access-gmx7c\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d" Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.787069 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.788560 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb6j9\" (UniqueName: \"kubernetes.io/projected/70aea1b0-13b2-43ee-a77d-10c3143e4a95-kube-api-access-cb6j9\") pod \"control-plane-machine-set-operator-78cbb6b69f-4x452\" (UID: \"70aea1b0-13b2-43ee-a77d-10c3143e4a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.798752 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2f2g\" (UniqueName: \"kubernetes.io/projected/84721999-239a-421e-a892-de0042ff1937-kube-api-access-w2f2g\") pod \"migrator-59844c95c7-dcw54\" (UID: \"84721999-239a-421e-a892-de0042ff1937\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.801137 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c337e9fe-a7db-4b56-92c4-82905fb59d53-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q224d\" (UID: \"c337e9fe-a7db-4b56-92c4-82905fb59d53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.811154 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx254\" (UniqueName: \"kubernetes.io/projected/b5933589-42d6-47af-b723-2af986d94c98-kube-api-access-rx254\") pod \"machine-config-operator-74547568cd-kc7kn\" (UID: \"b5933589-42d6-47af-b723-2af986d94c98\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.826214 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfmlf\" (UniqueName: \"kubernetes.io/projected/40070d0f-4d18-4d7c-a85a-cd2f904ea27a-kube-api-access-wfmlf\") pod \"machine-api-operator-5694c8668f-jx5dt\" (UID: \"40070d0f-4d18-4d7c-a85a-cd2f904ea27a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.854559 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt"
Jan 21 21:11:08 crc kubenswrapper[4860]: W0121 21:11:08.865010 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88e1a68_3348_4ac7_b0b8_ba2215da118f.slice/crio-d1aa1fbcb3b05b71c2d7909386e8895b6d162cda9ea59e879692d74db752d483 WatchSource:0}: Error finding container d1aa1fbcb3b05b71c2d7909386e8895b6d162cda9ea59e879692d74db752d483: Status 404 returned error can't find the container with id d1aa1fbcb3b05b71c2d7909386e8895b6d162cda9ea59e879692d74db752d483
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.865461 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.875663 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49544\" (UniqueName: \"kubernetes.io/projected/63cbab1e-f06a-4692-836f-3cdbb9260104-kube-api-access-49544\") pod \"etcd-operator-b45778765-tcx72\" (UID: \"63cbab1e-f06a-4692-836f-3cdbb9260104\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.876217 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqdt7\" (UniqueName: \"kubernetes.io/projected/265b2226-a08f-4ba0-b20a-25e422c21c37-kube-api-access-hqdt7\") pod \"dns-operator-744455d44c-cgwn6\" (UID: \"265b2226-a08f-4ba0-b20a-25e422c21c37\") " pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.924973 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.931265 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.961901 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.973443 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.974626 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ksp8\" (UniqueName: \"kubernetes.io/projected/b56d611d-64a3-491f-b878-da0793846cef-kube-api-access-6ksp8\") pod \"olm-operator-6b444d44fb-vz8ns\" (UID: \"b56d611d-64a3-491f-b878-da0793846cef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.974690 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac-signing-cabundle\") pod \"service-ca-9c57cc56f-b9252\" (UID: \"3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-b9252"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.974805 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqxc8\" (UniqueName: \"kubernetes.io/projected/be1fd5b6-dccd-44e4-b38b-8c0ca448f013-kube-api-access-kqxc8\") pod \"ingress-canary-4dq5s\" (UID: \"be1fd5b6-dccd-44e4-b38b-8c0ca448f013\") " pod="openshift-ingress-canary/ingress-canary-4dq5s"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.974867 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-certificates\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.974884 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-bound-sa-token\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.974987 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a070564-7a41-4207-b27f-d6ebddec9a55-tmpfs\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.975014 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66z9l\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-kube-api-access-66z9l\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.975033 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/751528b2-dccf-44a3-abc3-d044da642fd6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hs75g\" (UID: \"751528b2-dccf-44a3-abc3-d044da642fd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.975081 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b56d611d-64a3-491f-b878-da0793846cef-srv-cert\") pod \"olm-operator-6b444d44fb-vz8ns\" (UID: \"b56d611d-64a3-491f-b878-da0793846cef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.975143 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a070564-7a41-4207-b27f-d6ebddec9a55-apiservice-cert\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.975201 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baea563c-2833-407f-9cfb-571b93350be2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k7nfg\" (UID: \"baea563c-2833-407f-9cfb-571b93350be2\") " pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.975219 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgmrc\" (UniqueName: \"kubernetes.io/projected/baea563c-2833-407f-9cfb-571b93350be2-kube-api-access-jgmrc\") pod \"marketplace-operator-79b997595-k7nfg\" (UID: \"baea563c-2833-407f-9cfb-571b93350be2\") " pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.975236 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a070564-7a41-4207-b27f-d6ebddec9a55-webhook-cert\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"
Jan 21 21:11:08 crc kubenswrapper[4860]: I0121 21:11:08.975254 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/751528b2-dccf-44a3-abc3-d044da642fd6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hs75g\" (UID: \"751528b2-dccf-44a3-abc3-d044da642fd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006206 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baea563c-2833-407f-9cfb-571b93350be2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k7nfg\" (UID: \"baea563c-2833-407f-9cfb-571b93350be2\") " pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006376 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/751528b2-dccf-44a3-abc3-d044da642fd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hs75g\" (UID: \"751528b2-dccf-44a3-abc3-d044da642fd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006410 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-trusted-ca\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006462 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-tls\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006491 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006734 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g86m9\" (UniqueName: \"kubernetes.io/projected/5a070564-7a41-4207-b27f-d6ebddec9a55-kube-api-access-g86m9\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006791 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sql88\" (UniqueName: \"kubernetes.io/projected/3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac-kube-api-access-sql88\") pod \"service-ca-9c57cc56f-b9252\" (UID: \"3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-b9252"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006816 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ce6d0d8-ad17-4129-801d-508640c3419a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006836 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971caae6-3ca9-4e02-852f-47abcf2bff31-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-trsgn\" (UID: \"971caae6-3ca9-4e02-852f-47abcf2bff31\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006881 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ce6d0d8-ad17-4129-801d-508640c3419a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006916 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be1fd5b6-dccd-44e4-b38b-8c0ca448f013-cert\") pod \"ingress-canary-4dq5s\" (UID: \"be1fd5b6-dccd-44e4-b38b-8c0ca448f013\") " pod="openshift-ingress-canary/ingress-canary-4dq5s"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006968 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b56d611d-64a3-491f-b878-da0793846cef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vz8ns\" (UID: \"b56d611d-64a3-491f-b878-da0793846cef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.006988 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/971caae6-3ca9-4e02-852f-47abcf2bff31-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-trsgn\" (UID: \"971caae6-3ca9-4e02-852f-47abcf2bff31\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.007119 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac-signing-key\") pod \"service-ca-9c57cc56f-b9252\" (UID: \"3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-b9252"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.007182 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971caae6-3ca9-4e02-852f-47abcf2bff31-config\") pod \"kube-apiserver-operator-766d6c64bb-trsgn\" (UID: \"971caae6-3ca9-4e02-852f-47abcf2bff31\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn"
Jan 21 21:11:09 crc kubenswrapper[4860]: E0121 21:11:09.014233 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:09.514216652 +0000 UTC m=+161.736395122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.016745 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.018864 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.024465 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.048259 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" event={"ID":"d62e94e1-ec68-4f36-9de7-005b8ed5a0ac","Type":"ContainerStarted","Data":"a36e108cb1b75c66fa3d890a14578d9f27f2159cfb3e38c2aa64bc9952545e19"}
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.048308 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" event={"ID":"d62e94e1-ec68-4f36-9de7-005b8ed5a0ac","Type":"ContainerStarted","Data":"0909ec99213d9d483533c86521049eaa15f9782384e575c44feef3ab3fa2dc87"}
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.050810 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-v4hsh" event={"ID":"b88e1a68-3348-4ac7-b0b8-ba2215da118f","Type":"ContainerStarted","Data":"d1aa1fbcb3b05b71c2d7909386e8895b6d162cda9ea59e879692d74db752d483"}
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.114510 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.115493 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.115873 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a070564-7a41-4207-b27f-d6ebddec9a55-tmpfs\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.115925 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66z9l\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-kube-api-access-66z9l\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.115982 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/751528b2-dccf-44a3-abc3-d044da642fd6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hs75g\" (UID: \"751528b2-dccf-44a3-abc3-d044da642fd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116013 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b56d611d-64a3-491f-b878-da0793846cef-srv-cert\") pod \"olm-operator-6b444d44fb-vz8ns\" (UID: \"b56d611d-64a3-491f-b878-da0793846cef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116047 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c8477838-23bd-4bd8-8e37-bdf34bff841b-certs\") pod \"machine-config-server-d6fh7\" (UID: \"c8477838-23bd-4bd8-8e37-bdf34bff841b\") " pod="openshift-machine-config-operator/machine-config-server-d6fh7"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116093 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a070564-7a41-4207-b27f-d6ebddec9a55-apiservice-cert\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116119 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baea563c-2833-407f-9cfb-571b93350be2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k7nfg\" (UID: \"baea563c-2833-407f-9cfb-571b93350be2\") " pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116139 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgmrc\" (UniqueName: \"kubernetes.io/projected/baea563c-2833-407f-9cfb-571b93350be2-kube-api-access-jgmrc\") pod \"marketplace-operator-79b997595-k7nfg\" (UID: \"baea563c-2833-407f-9cfb-571b93350be2\") " pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116155 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a070564-7a41-4207-b27f-d6ebddec9a55-webhook-cert\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116173 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/751528b2-dccf-44a3-abc3-d044da642fd6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hs75g\" (UID: \"751528b2-dccf-44a3-abc3-d044da642fd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116215 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baea563c-2833-407f-9cfb-571b93350be2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k7nfg\" (UID: \"baea563c-2833-407f-9cfb-571b93350be2\") " pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116238 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-plugins-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116274 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c8477838-23bd-4bd8-8e37-bdf34bff841b-node-bootstrap-token\") pod \"machine-config-server-d6fh7\" (UID: \"c8477838-23bd-4bd8-8e37-bdf34bff841b\") " pod="openshift-machine-config-operator/machine-config-server-d6fh7"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116301 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/751528b2-dccf-44a3-abc3-d044da642fd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hs75g\" (UID: \"751528b2-dccf-44a3-abc3-d044da642fd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116320 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-trusted-ca\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116343 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11785eb3-a6cf-47e9-b902-3733703720ca-metrics-tls\") pod \"dns-default-s24bn\" (UID: \"11785eb3-a6cf-47e9-b902-3733703720ca\") " pod="openshift-dns/dns-default-s24bn"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116380 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-tls\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116397 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-socket-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116463 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-registration-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116485 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g86m9\" (UniqueName: \"kubernetes.io/projected/5a070564-7a41-4207-b27f-d6ebddec9a55-kube-api-access-g86m9\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116508 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sql88\" (UniqueName: \"kubernetes.io/projected/3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac-kube-api-access-sql88\") pod \"service-ca-9c57cc56f-b9252\" (UID: \"3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-b9252"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116526 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ce6d0d8-ad17-4129-801d-508640c3419a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116548 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971caae6-3ca9-4e02-852f-47abcf2bff31-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-trsgn\" (UID: \"971caae6-3ca9-4e02-852f-47abcf2bff31\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116580 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ce6d0d8-ad17-4129-801d-508640c3419a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116599 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11785eb3-a6cf-47e9-b902-3733703720ca-config-volume\") pod \"dns-default-s24bn\" (UID: \"11785eb3-a6cf-47e9-b902-3733703720ca\") " pod="openshift-dns/dns-default-s24bn"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116622 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be1fd5b6-dccd-44e4-b38b-8c0ca448f013-cert\") pod \"ingress-canary-4dq5s\" (UID: \"be1fd5b6-dccd-44e4-b38b-8c0ca448f013\") " pod="openshift-ingress-canary/ingress-canary-4dq5s"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116641 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-csi-data-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116664 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b56d611d-64a3-491f-b878-da0793846cef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vz8ns\" (UID: \"b56d611d-64a3-491f-b878-da0793846cef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116685 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/971caae6-3ca9-4e02-852f-47abcf2bff31-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-trsgn\" (UID: \"971caae6-3ca9-4e02-852f-47abcf2bff31\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116703 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h8z9\" (UniqueName: \"kubernetes.io/projected/11785eb3-a6cf-47e9-b902-3733703720ca-kube-api-access-4h8z9\") pod \"dns-default-s24bn\" (UID: \"11785eb3-a6cf-47e9-b902-3733703720ca\") " pod="openshift-dns/dns-default-s24bn"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116740 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac-signing-key\") pod \"service-ca-9c57cc56f-b9252\" (UID: \"3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-b9252"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116763 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971caae6-3ca9-4e02-852f-47abcf2bff31-config\") pod \"kube-apiserver-operator-766d6c64bb-trsgn\" (UID: \"971caae6-3ca9-4e02-852f-47abcf2bff31\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116783 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khqqc\" (UniqueName: \"kubernetes.io/projected/c8477838-23bd-4bd8-8e37-bdf34bff841b-kube-api-access-khqqc\") pod \"machine-config-server-d6fh7\" (UID: \"c8477838-23bd-4bd8-8e37-bdf34bff841b\") " pod="openshift-machine-config-operator/machine-config-server-d6fh7"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116823 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ksp8\" (UniqueName: \"kubernetes.io/projected/b56d611d-64a3-491f-b878-da0793846cef-kube-api-access-6ksp8\") pod \"olm-operator-6b444d44fb-vz8ns\" (UID: \"b56d611d-64a3-491f-b878-da0793846cef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116845 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-mountpoint-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116904 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac-signing-cabundle\") pod \"service-ca-9c57cc56f-b9252\" (UID: \"3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-b9252"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.116963 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwcjv\" (UniqueName: \"kubernetes.io/projected/bfcb6184-d86e-4425-9c9c-99ec900dea78-kube-api-access-rwcjv\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.117015 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqxc8\" (UniqueName: \"kubernetes.io/projected/be1fd5b6-dccd-44e4-b38b-8c0ca448f013-kube-api-access-kqxc8\") pod \"ingress-canary-4dq5s\" (UID: \"be1fd5b6-dccd-44e4-b38b-8c0ca448f013\") " pod="openshift-ingress-canary/ingress-canary-4dq5s"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.117037 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-certificates\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.117057 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-bound-sa-token\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:09 crc kubenswrapper[4860]: E0121 21:11:09.121643 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:09.621613252 +0000 UTC m=+161.843791732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.122444 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a070564-7a41-4207-b27f-d6ebddec9a55-tmpfs\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.128256 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baea563c-2833-407f-9cfb-571b93350be2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k7nfg\" (UID: \"baea563c-2833-407f-9cfb-571b93350be2\") " pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.133830 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971caae6-3ca9-4e02-852f-47abcf2bff31-config\") pod \"kube-apiserver-operator-766d6c64bb-trsgn\" (UID: \"971caae6-3ca9-4e02-852f-47abcf2bff31\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn"
Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.138998 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac-signing-cabundle\") pod \"service-ca-9c57cc56f-b9252\" (UID: 
\"3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-b9252" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.139198 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-certificates\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.139832 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-trusted-ca\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.140165 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ce6d0d8-ad17-4129-801d-508640c3419a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.140667 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/751528b2-dccf-44a3-abc3-d044da642fd6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hs75g\" (UID: \"751528b2-dccf-44a3-abc3-d044da642fd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.144923 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nm4mt" 
event={"ID":"57019cb4-962f-4e52-889d-d11bac56fa88","Type":"ContainerStarted","Data":"a7732d7d656f290c80ece6ffbef8eb8ee8be096f8412a67be00c88b2a4a0bfd7"} Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.144995 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nm4mt" event={"ID":"57019cb4-962f-4e52-889d-d11bac56fa88","Type":"ContainerStarted","Data":"ed0f585cf68ffa3e3a283ed0f83213251b6132ae9585d5f2d9c093461e9b5d8d"} Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.153370 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a070564-7a41-4207-b27f-d6ebddec9a55-webhook-cert\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.154188 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-tls\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.155821 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sql88\" (UniqueName: \"kubernetes.io/projected/3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac-kube-api-access-sql88\") pod \"service-ca-9c57cc56f-b9252\" (UID: \"3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-b9252" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.156399 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b56d611d-64a3-491f-b878-da0793846cef-srv-cert\") pod \"olm-operator-6b444d44fb-vz8ns\" (UID: 
\"b56d611d-64a3-491f-b878-da0793846cef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.158039 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971caae6-3ca9-4e02-852f-47abcf2bff31-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-trsgn\" (UID: \"971caae6-3ca9-4e02-852f-47abcf2bff31\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.166666 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b56d611d-64a3-491f-b878-da0793846cef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vz8ns\" (UID: \"b56d611d-64a3-491f-b878-da0793846cef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.166738 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ce6d0d8-ad17-4129-801d-508640c3419a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.167265 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a070564-7a41-4207-b27f-d6ebddec9a55-apiservice-cert\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.167317 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/baea563c-2833-407f-9cfb-571b93350be2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k7nfg\" (UID: \"baea563c-2833-407f-9cfb-571b93350be2\") " pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.171044 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/751528b2-dccf-44a3-abc3-d044da642fd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hs75g\" (UID: \"751528b2-dccf-44a3-abc3-d044da642fd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.175031 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.179082 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/751528b2-dccf-44a3-abc3-d044da642fd6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hs75g\" (UID: \"751528b2-dccf-44a3-abc3-d044da642fd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.202975 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be1fd5b6-dccd-44e4-b38b-8c0ca448f013-cert\") pod \"ingress-canary-4dq5s\" (UID: \"be1fd5b6-dccd-44e4-b38b-8c0ca448f013\") " pod="openshift-ingress-canary/ingress-canary-4dq5s" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.203524 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-bound-sa-token\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: 
\"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.210096 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac-signing-key\") pod \"service-ca-9c57cc56f-b9252\" (UID: \"3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac\") " pod="openshift-service-ca/service-ca-9c57cc56f-b9252" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.224580 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khqqc\" (UniqueName: \"kubernetes.io/projected/c8477838-23bd-4bd8-8e37-bdf34bff841b-kube-api-access-khqqc\") pod \"machine-config-server-d6fh7\" (UID: \"c8477838-23bd-4bd8-8e37-bdf34bff841b\") " pod="openshift-machine-config-operator/machine-config-server-d6fh7" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.224708 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-mountpoint-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.224773 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwcjv\" (UniqueName: \"kubernetes.io/projected/bfcb6184-d86e-4425-9c9c-99ec900dea78-kube-api-access-rwcjv\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.224954 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c8477838-23bd-4bd8-8e37-bdf34bff841b-certs\") pod \"machine-config-server-d6fh7\" (UID: 
\"c8477838-23bd-4bd8-8e37-bdf34bff841b\") " pod="openshift-machine-config-operator/machine-config-server-d6fh7" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.225369 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-plugins-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.225589 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-mountpoint-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.225910 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c8477838-23bd-4bd8-8e37-bdf34bff841b-node-bootstrap-token\") pod \"machine-config-server-d6fh7\" (UID: \"c8477838-23bd-4bd8-8e37-bdf34bff841b\") " pod="openshift-machine-config-operator/machine-config-server-d6fh7" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.226063 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11785eb3-a6cf-47e9-b902-3733703720ca-metrics-tls\") pod \"dns-default-s24bn\" (UID: \"11785eb3-a6cf-47e9-b902-3733703720ca\") " pod="openshift-dns/dns-default-s24bn" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.226138 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgmrc\" (UniqueName: \"kubernetes.io/projected/baea563c-2833-407f-9cfb-571b93350be2-kube-api-access-jgmrc\") pod \"marketplace-operator-79b997595-k7nfg\" (UID: 
\"baea563c-2833-407f-9cfb-571b93350be2\") " pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.226204 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-socket-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.226220 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqxc8\" (UniqueName: \"kubernetes.io/projected/be1fd5b6-dccd-44e4-b38b-8c0ca448f013-kube-api-access-kqxc8\") pod \"ingress-canary-4dq5s\" (UID: \"be1fd5b6-dccd-44e4-b38b-8c0ca448f013\") " pod="openshift-ingress-canary/ingress-canary-4dq5s" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.226174 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-plugins-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.226378 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.226640 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-registration-dir\") pod 
\"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.226917 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11785eb3-a6cf-47e9-b902-3733703720ca-config-volume\") pod \"dns-default-s24bn\" (UID: \"11785eb3-a6cf-47e9-b902-3733703720ca\") " pod="openshift-dns/dns-default-s24bn" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.226991 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-csi-data-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.227040 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h8z9\" (UniqueName: \"kubernetes.io/projected/11785eb3-a6cf-47e9-b902-3733703720ca-kube-api-access-4h8z9\") pod \"dns-default-s24bn\" (UID: \"11785eb3-a6cf-47e9-b902-3733703720ca\") " pod="openshift-dns/dns-default-s24bn" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.227089 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ksp8\" (UniqueName: \"kubernetes.io/projected/b56d611d-64a3-491f-b878-da0793846cef-kube-api-access-6ksp8\") pod \"olm-operator-6b444d44fb-vz8ns\" (UID: \"b56d611d-64a3-491f-b878-da0793846cef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.227220 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-socket-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: 
\"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: E0121 21:11:09.227680 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:09.727652429 +0000 UTC m=+161.949830899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.227923 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11785eb3-a6cf-47e9-b902-3733703720ca-config-volume\") pod \"dns-default-s24bn\" (UID: \"11785eb3-a6cf-47e9-b902-3733703720ca\") " pod="openshift-dns/dns-default-s24bn" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.227987 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-registration-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.228057 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb6184-d86e-4425-9c9c-99ec900dea78-csi-data-dir\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " 
pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.257292 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g86m9\" (UniqueName: \"kubernetes.io/projected/5a070564-7a41-4207-b27f-d6ebddec9a55-kube-api-access-g86m9\") pod \"packageserver-d55dfcdfc-ftls8\" (UID: \"5a070564-7a41-4207-b27f-d6ebddec9a55\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.257969 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/971caae6-3ca9-4e02-852f-47abcf2bff31-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-trsgn\" (UID: \"971caae6-3ca9-4e02-852f-47abcf2bff31\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.258431 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66z9l\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-kube-api-access-66z9l\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.259726 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c8477838-23bd-4bd8-8e37-bdf34bff841b-certs\") pod \"machine-config-server-d6fh7\" (UID: \"c8477838-23bd-4bd8-8e37-bdf34bff841b\") " pod="openshift-machine-config-operator/machine-config-server-d6fh7" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.260286 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c8477838-23bd-4bd8-8e37-bdf34bff841b-node-bootstrap-token\") pod \"machine-config-server-d6fh7\" 
(UID: \"c8477838-23bd-4bd8-8e37-bdf34bff841b\") " pod="openshift-machine-config-operator/machine-config-server-d6fh7" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.260958 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11785eb3-a6cf-47e9-b902-3733703720ca-metrics-tls\") pod \"dns-default-s24bn\" (UID: \"11785eb3-a6cf-47e9-b902-3733703720ca\") " pod="openshift-dns/dns-default-s24bn" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.274332 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7"] Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.296198 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwcjv\" (UniqueName: \"kubernetes.io/projected/bfcb6184-d86e-4425-9c9c-99ec900dea78-kube-api-access-rwcjv\") pod \"csi-hostpathplugin-rkt4n\" (UID: \"bfcb6184-d86e-4425-9c9c-99ec900dea78\") " pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.299407 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khqqc\" (UniqueName: \"kubernetes.io/projected/c8477838-23bd-4bd8-8e37-bdf34bff841b-kube-api-access-khqqc\") pod \"machine-config-server-d6fh7\" (UID: \"c8477838-23bd-4bd8-8e37-bdf34bff841b\") " pod="openshift-machine-config-operator/machine-config-server-d6fh7" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.310955 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.314293 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h8z9\" (UniqueName: \"kubernetes.io/projected/11785eb3-a6cf-47e9-b902-3733703720ca-kube-api-access-4h8z9\") pod \"dns-default-s24bn\" (UID: \"11785eb3-a6cf-47e9-b902-3733703720ca\") " pod="openshift-dns/dns-default-s24bn" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.314373 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl"] Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.315601 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nd5p4"] Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.315949 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.327434 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.328331 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:09 crc kubenswrapper[4860]: E0121 21:11:09.328449 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 21:11:09.828417287 +0000 UTC m=+162.050595757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.329041 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:09 crc kubenswrapper[4860]: E0121 21:11:09.329543 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:09.829532301 +0000 UTC m=+162.051710781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.331271 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.352316 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-b9252" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.373692 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g"] Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.380344 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.380755 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4dq5s" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.488998 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.489136 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-d6fh7" Jan 21 21:11:09 crc kubenswrapper[4860]: E0121 21:11:09.489510 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:09.98947525 +0000 UTC m=+162.211653720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.489990 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.490322 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-s24bn" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.495723 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fvk47"] Jan 21 21:11:09 crc kubenswrapper[4860]: W0121 21:11:09.500589 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded4c1784_1d20_4e8f_b8c9_ee3a641bf6c6.slice/crio-244a0b4fb67e1fcdf98aa2241e09f1f35152bf2af3bf51cc6db4c41d771b82e2 WatchSource:0}: Error finding container 244a0b4fb67e1fcdf98aa2241e09f1f35152bf2af3bf51cc6db4c41d771b82e2: Status 404 returned error can't find the container with id 244a0b4fb67e1fcdf98aa2241e09f1f35152bf2af3bf51cc6db4c41d771b82e2 Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.606963 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 
21:11:09 crc kubenswrapper[4860]: E0121 21:11:09.607605 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:10.107575364 +0000 UTC m=+162.329753834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:09 crc kubenswrapper[4860]: E0121 21:11:09.708253 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08186c65_b069_4756_af19_5255a7a5fe2f.slice/crio-ad0bb7b15f5fe07bd5ca586df387edeaf2721e5323e3c7659ca1d57aff9de554\": RecentStats: unable to find data in memory cache]" Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.708560 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:09 crc kubenswrapper[4860]: E0121 21:11:09.709051 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 21:11:10.209026002 +0000 UTC m=+162.431204472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.709094 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp"] Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.783176 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xxb4c"] Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.809793 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hbh47"] Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.811212 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:09 crc kubenswrapper[4860]: E0121 21:11:09.811919 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:10.311899725 +0000 UTC m=+162.534078195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:09 crc kubenswrapper[4860]: W0121 21:11:09.821567 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82c01e58_4984_4ac3_951d_0f96fff19f57.slice/crio-30e3cc8fc30b4b295826331a6101b5ac79f5c291edb3b51696131529b5acebcd WatchSource:0}: Error finding container 30e3cc8fc30b4b295826331a6101b5ac79f5c291edb3b51696131529b5acebcd: Status 404 returned error can't find the container with id 30e3cc8fc30b4b295826331a6101b5ac79f5c291edb3b51696131529b5acebcd Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.824295 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q9n6j"] Jan 21 21:11:09 crc kubenswrapper[4860]: I0121 21:11:09.925530 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:09 crc kubenswrapper[4860]: E0121 21:11:09.926185 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:10.426164423 +0000 UTC m=+162.648342893 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.027844 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:10 crc kubenswrapper[4860]: E0121 21:11:10.028261 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:10.52824931 +0000 UTC m=+162.750427780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.129518 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:10 crc kubenswrapper[4860]: E0121 21:11:10.129918 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:10.629894214 +0000 UTC m=+162.852072684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.157827 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" event={"ID":"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6","Type":"ContainerStarted","Data":"244a0b4fb67e1fcdf98aa2241e09f1f35152bf2af3bf51cc6db4c41d771b82e2"} Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.158537 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" event={"ID":"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1","Type":"ContainerStarted","Data":"9f42f32f3be32a99852f9fb7e19100f7899cf4930098d73e8fa0041a3ca43970"} Jan 21 21:11:10 crc kubenswrapper[4860]: W0121 21:11:10.171705 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb13868e_5322_4a98_b168_40a0a6bd8459.slice/crio-5a15135a7a2f8bda05be053f4e6206dcf8a7c4d3954121f2c7c146f8a54ea96e WatchSource:0}: Error finding container 5a15135a7a2f8bda05be053f4e6206dcf8a7c4d3954121f2c7c146f8a54ea96e: Status 404 returned error can't find the container with id 5a15135a7a2f8bda05be053f4e6206dcf8a7c4d3954121f2c7c146f8a54ea96e Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.188630 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" 
event={"ID":"08186c65-b069-4756-af19-5255a7a5fe2f","Type":"ContainerStarted","Data":"ad0bb7b15f5fe07bd5ca586df387edeaf2721e5323e3c7659ca1d57aff9de554"} Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.191730 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" event={"ID":"82c01e58-4984-4ac3-951d-0f96fff19f57","Type":"ContainerStarted","Data":"30e3cc8fc30b4b295826331a6101b5ac79f5c291edb3b51696131529b5acebcd"} Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.192728 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" event={"ID":"32bee613-dd08-4612-936c-dd68b630651e","Type":"ContainerStarted","Data":"566faaa8a0771c703452685ccfed161facb92d549c6d66bd93a1b85505a622e3"} Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.193482 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" event={"ID":"61cb972e-5da1-4381-9490-337000f6aa00","Type":"ContainerStarted","Data":"41d7b2c78340eec5a4e516ae15c79968d020aa4fb95f7dd66f71ebe4171698b3"} Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.194670 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" event={"ID":"d1fafd15-88be-43d0-b7f0-750b4c592352","Type":"ContainerStarted","Data":"36ebd462f46788980de72d222c6c9f02be5f189dd4d3a438b3309b66696365f5"} Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.195218 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.231253 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:10 crc kubenswrapper[4860]: E0121 21:11:10.231780 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:10.731754595 +0000 UTC m=+162.953933065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.333006 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:10 crc kubenswrapper[4860]: E0121 21:11:10.333559 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:10.833500082 +0000 UTC m=+163.055678562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.333666 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:10 crc kubenswrapper[4860]: E0121 21:11:10.334329 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:10.834298216 +0000 UTC m=+163.056476686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.454388 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:10 crc kubenswrapper[4860]: E0121 21:11:10.454683 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:10.95462811 +0000 UTC m=+163.176806580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.455910 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:10 crc kubenswrapper[4860]: E0121 21:11:10.456328 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:10.956307 +0000 UTC m=+163.178485470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.603721 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:10 crc kubenswrapper[4860]: E0121 21:11:10.604401 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:11.104375197 +0000 UTC m=+163.326553667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.683747 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nm4mt" Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.706217 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:10 crc kubenswrapper[4860]: E0121 21:11:10.706801 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:11.206787355 +0000 UTC m=+163.428965825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:10 crc kubenswrapper[4860]: W0121 21:11:10.711205 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8477838_23bd_4bd8_8e37_bdf34bff841b.slice/crio-9877055eb89b1460ee05be827f76a41bdb6c10dd8e41495e631c709cbce15f6d WatchSource:0}: Error finding container 9877055eb89b1460ee05be827f76a41bdb6c10dd8e41495e631c709cbce15f6d: Status 404 returned error can't find the container with id 9877055eb89b1460ee05be827f76a41bdb6c10dd8e41495e631c709cbce15f6d Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.819914 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:10 crc kubenswrapper[4860]: E0121 21:11:10.821061 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:11.321029213 +0000 UTC m=+163.543207683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:10 crc kubenswrapper[4860]: I0121 21:11:10.950882 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:10 crc kubenswrapper[4860]: E0121 21:11:10.951201 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:11.451188645 +0000 UTC m=+163.673367115 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.055715 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.056342 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:11.556314655 +0000 UTC m=+163.778493115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.187542 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.189410 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:11.689391965 +0000 UTC m=+163.911570435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.189821 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hv4bj"] Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.189903 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn"] Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.351269 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.351619 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:11.851587314 +0000 UTC m=+164.073765784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.351901 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.352388 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:11.852367177 +0000 UTC m=+164.074545647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.454558 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.454767 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:11.954731763 +0000 UTC m=+164.176910233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.454989 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.455404 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:11.955387452 +0000 UTC m=+164.177565922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.500342 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" event={"ID":"3a3fc408-742d-46bb-93cd-05343faababf","Type":"ContainerStarted","Data":"873ac4891f5416d7efd0aa2d67b51cb488c563314e62dc1d5963a8050e90957d"} Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.535017 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-d6fh7" event={"ID":"c8477838-23bd-4bd8-8e37-bdf34bff841b","Type":"ContainerStarted","Data":"9877055eb89b1460ee05be827f76a41bdb6c10dd8e41495e631c709cbce15f6d"} Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.559950 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.560496 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:12.060462111 +0000 UTC m=+164.282640581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.629434 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" event={"ID":"fb13868e-5322-4a98-b168-40a0a6bd8459","Type":"ContainerStarted","Data":"5a15135a7a2f8bda05be053f4e6206dcf8a7c4d3954121f2c7c146f8a54ea96e"} Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.646123 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nzkbt" podStartSLOduration=139.646101338 podStartE2EDuration="2m19.646101338s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:11.644866851 +0000 UTC m=+163.867045321" watchObservedRunningTime="2026-01-21 21:11:11.646101338 +0000 UTC m=+163.868279808" Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.661330 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.661734 4860 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:12.161720264 +0000 UTC m=+164.383898734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.730527 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hbh47" event={"ID":"235af04d-ef1a-4328-a0c4-aa6d5bc04b92","Type":"ContainerStarted","Data":"1c8a1d2c227df1380fea2314a63e605a4df9c91e7f905cd0069c17b406a74b90"} Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.735780 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-v4hsh" event={"ID":"b88e1a68-3348-4ac7-b0b8-ba2215da118f","Type":"ContainerStarted","Data":"b9855a6b262aa4df4a66551f04b6fea7147bd5642072c495cd224153aea8048b"} Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.762160 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.762623 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:12.262604795 +0000 UTC m=+164.484783265 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.812216 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nm4mt" podStartSLOduration=138.812187184 podStartE2EDuration="2m18.812187184s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:11.791140304 +0000 UTC m=+164.013318804" watchObservedRunningTime="2026-01-21 21:11:11.812187184 +0000 UTC m=+164.034365654" Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.863662 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.864015 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 21:11:12.364002942 +0000 UTC m=+164.586181412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.964483 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.964733 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:12.464700917 +0000 UTC m=+164.686879397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:11 crc kubenswrapper[4860]: I0121 21:11:11.965055 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:11 crc kubenswrapper[4860]: E0121 21:11:11.965766 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:12.465755789 +0000 UTC m=+164.687934269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.066322 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:12 crc kubenswrapper[4860]: E0121 21:11:12.066880 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:12.566854896 +0000 UTC m=+164.789033366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.188688 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:12 crc kubenswrapper[4860]: E0121 21:11:12.189110 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:12.689098177 +0000 UTC m=+164.911276647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.252039 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-v4hsh" podStartSLOduration=139.252008273 podStartE2EDuration="2m19.252008273s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:11.814897136 +0000 UTC m=+164.037075606" watchObservedRunningTime="2026-01-21 21:11:12.252008273 +0000 UTC m=+164.474186763" Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.273913 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452"] Jan 21 21:11:12 crc kubenswrapper[4860]: W0121 21:11:12.288786 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40070d0f_4d18_4d7c_a85a_cd2f904ea27a.slice/crio-7e4931a3b22deef98a1ffdc0650fe087f844cec022233901045a6c0baaded8da WatchSource:0}: Error finding container 7e4931a3b22deef98a1ffdc0650fe087f844cec022233901045a6c0baaded8da: Status 404 returned error can't find the container with id 7e4931a3b22deef98a1ffdc0650fe087f844cec022233901045a6c0baaded8da Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.290479 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k"] Jan 21 21:11:12 crc 
kubenswrapper[4860]: I0121 21:11:12.291247 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:12 crc kubenswrapper[4860]: E0121 21:11:12.291744 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:12.791724802 +0000 UTC m=+165.013903272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.311370 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jx5dt"] Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.335248 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.340456 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lcbjc"] Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.340792 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn"] Jan 21 21:11:12 
crc kubenswrapper[4860]: I0121 21:11:12.397984 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:12 crc kubenswrapper[4860]: E0121 21:11:12.399441 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:12.899420879 +0000 UTC m=+165.121599349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.502758 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:12 crc kubenswrapper[4860]: E0121 21:11:12.503348 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 21:11:13.003329333 +0000 UTC m=+165.225507793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.605303 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:12 crc kubenswrapper[4860]: E0121 21:11:12.605829 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:13.105811863 +0000 UTC m=+165.327990323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.624871 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.635351 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:12 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:12 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:12 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.635426 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.650710 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh"] Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.709539 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:12 crc kubenswrapper[4860]: E0121 21:11:12.710105 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:13.210085716 +0000 UTC m=+165.432264186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.765243 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc"] Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.771521 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc" event={"ID":"2e29e04b-89f7-4d77-8e17-0355493a1d9f","Type":"ContainerStarted","Data":"fec1220d403029447369d9f2b342cc3165a610e7f86a3cfbdf0a9b06770df953"} Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.774826 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-slx45"] Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.776582 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hv4bj" 
event={"ID":"8445d936-5e91-4817-afda-a75203024c29","Type":"ContainerStarted","Data":"35f840cd3a60cbd6c2b168baa8c664c563ae5b8e7c25769468cd4389bd569e71"} Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.777774 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452" event={"ID":"70aea1b0-13b2-43ee-a77d-10c3143e4a95","Type":"ContainerStarted","Data":"835aa3af1a7a75c9574b01b123e0460d6b231417648cac762cfc7d5aa5ddf6cf"} Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.782565 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn" event={"ID":"d81c2475-b36c-44d5-a7da-1bec8c5871b0","Type":"ContainerStarted","Data":"118427fd40de1a588a0d9ebf051e98b7a52254553d203e13a87ce4d2c9177633"} Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.784406 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" event={"ID":"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde","Type":"ContainerStarted","Data":"2d4331f1c9580bd17af60ae105141a7e41ec110edb34e05f95d237977e274084"} Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.786053 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" event={"ID":"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1","Type":"ContainerStarted","Data":"36924c1842314be88bfa57a5e209943e1fdbd2e12599736d32e7d88c05b0392a"} Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.786899 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.796561 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt" 
event={"ID":"40070d0f-4d18-4d7c-a85a-cd2f904ea27a","Type":"ContainerStarted","Data":"7e4931a3b22deef98a1ffdc0650fe087f844cec022233901045a6c0baaded8da"} Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.807323 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tcx72"] Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.811041 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:12 crc kubenswrapper[4860]: E0121 21:11:12.812182 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:13.312168814 +0000 UTC m=+165.534347284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.819734 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" event={"ID":"61cb972e-5da1-4381-9490-337000f6aa00","Type":"ContainerStarted","Data":"9d08698ea7cc9c4e50ab510ea238310553e7483a2449f3c74eab2033cfaccf55"} Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.823140 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" podStartSLOduration=139.823118837 podStartE2EDuration="2m19.823118837s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:12.809206704 +0000 UTC m=+165.031385174" watchObservedRunningTime="2026-01-21 21:11:12.823118837 +0000 UTC m=+165.045297297" Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.829626 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" event={"ID":"402083eb-5844-4f8c-8dfa-067947a1bc48","Type":"ContainerStarted","Data":"34804c42ca3965cbdf95a5e9ce1e8db9afb447807a9480aadb0d1eb9009d1b8e"} Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.835862 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp"] Jan 21 21:11:12 crc kubenswrapper[4860]: 
I0121 21:11:12.843657 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g"] Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.843699 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" event={"ID":"ed4c1784-1d20-4e8f-b8c9-ee3a641bf6c6","Type":"ContainerStarted","Data":"1d22f34fcff6eca8c42413564d8fd18ccc1b08c569c17e84ebeb9e8020a78911"} Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.852656 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k7nfg"] Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.853753 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" event={"ID":"82c01e58-4984-4ac3-951d-0f96fff19f57","Type":"ContainerStarted","Data":"6150dd3cb0f0cc79f6e4b1cba2c676a4e43c109d7bbed87359509b31090e74f5"} Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.859082 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8"] Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.860139 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r8wbl" podStartSLOduration=139.860094893 podStartE2EDuration="2m19.860094893s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:12.843704534 +0000 UTC m=+165.065883004" watchObservedRunningTime="2026-01-21 21:11:12.860094893 +0000 UTC m=+165.082273383" Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.870673 4860 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-nd5p4" podStartSLOduration=140.870651604 podStartE2EDuration="2m20.870651604s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:12.870341045 +0000 UTC m=+165.092519515" watchObservedRunningTime="2026-01-21 21:11:12.870651604 +0000 UTC m=+165.092830074" Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.925570 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4dq5s"] Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.932587 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:12 crc kubenswrapper[4860]: E0121 21:11:12.937153 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:13.437122567 +0000 UTC m=+165.659301037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.962247 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj"] Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.962985 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gnn4g" podStartSLOduration=139.962957194 podStartE2EDuration="2m19.962957194s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:12.913638333 +0000 UTC m=+165.135816803" watchObservedRunningTime="2026-01-21 21:11:12.962957194 +0000 UTC m=+165.185135674" Jan 21 21:11:12 crc kubenswrapper[4860]: I0121 21:11:12.990231 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54"] Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.003323 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns"] Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.022993 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn"] Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.024040 4860 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rkt4n"] Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.027767 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l"] Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.043237 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cgwn6"] Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.045387 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-b9252"] Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.045527 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-s24bn"] Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.046243 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:13 crc kubenswrapper[4860]: E0121 21:11:13.046826 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:13.546807416 +0000 UTC m=+165.768985876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.047714 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn"] Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.058874 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q224d"] Jan 21 21:11:13 crc kubenswrapper[4860]: W0121 21:11:13.094167 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfcb6184_d86e_4425_9c9c_99ec900dea78.slice/crio-9c051d2cbe2b3b5d62cf0121663de53accc6831dfc925d6519d8412b78c6b8ab WatchSource:0}: Error finding container 9c051d2cbe2b3b5d62cf0121663de53accc6831dfc925d6519d8412b78c6b8ab: Status 404 returned error can't find the container with id 9c051d2cbe2b3b5d62cf0121663de53accc6831dfc925d6519d8412b78c6b8ab Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.147283 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:13 crc kubenswrapper[4860]: E0121 21:11:13.147709 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:13.647689507 +0000 UTC m=+165.869867977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:13 crc kubenswrapper[4860]: W0121 21:11:13.216244 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19b5214f_7427_49e9_a40e_2c295e1600d4.slice/crio-131ef37b83606f2c2c55e59cda455ff7e5dc5bf4407446006ace36ffaf5e7026 WatchSource:0}: Error finding container 131ef37b83606f2c2c55e59cda455ff7e5dc5bf4407446006ace36ffaf5e7026: Status 404 returned error can't find the container with id 131ef37b83606f2c2c55e59cda455ff7e5dc5bf4407446006ace36ffaf5e7026 Jan 21 21:11:13 crc kubenswrapper[4860]: W0121 21:11:13.216536 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b502a61_43c2_4e9c_b9a9_0e3b2f6bc8ac.slice/crio-1bfccb0d37834fb837be2875a793a0e7f5b17b6345843e5b42c63fcde6e93bb3 WatchSource:0}: Error finding container 1bfccb0d37834fb837be2875a793a0e7f5b17b6345843e5b42c63fcde6e93bb3: Status 404 returned error can't find the container with id 1bfccb0d37834fb837be2875a793a0e7f5b17b6345843e5b42c63fcde6e93bb3 Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.242174 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.250977 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:13 crc kubenswrapper[4860]: E0121 21:11:13.251744 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:13.751723714 +0000 UTC m=+165.973902184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.356456 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:13 crc kubenswrapper[4860]: E0121 21:11:13.356780 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:13.856762732 +0000 UTC m=+166.078941202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.457690 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:13 crc kubenswrapper[4860]: E0121 21:11:13.458100 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:13.958087326 +0000 UTC m=+166.180265796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.592144 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:13 crc kubenswrapper[4860]: E0121 21:11:13.592423 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:14.092384604 +0000 UTC m=+166.314563074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.593032 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:13 crc kubenswrapper[4860]: E0121 21:11:13.593493 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:14.093479228 +0000 UTC m=+166.315657888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.649764 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:13 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:13 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:13 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.649858 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.697343 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:13 crc kubenswrapper[4860]: E0121 21:11:13.697919 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 21:11:14.197872545 +0000 UTC m=+166.420051015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:13 crc kubenswrapper[4860]: E0121 21:11:13.702434 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:14.202408684 +0000 UTC m=+166.424587154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:13 crc kubenswrapper[4860]: I0121 21:11:13.701927 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.215755 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:14 crc kubenswrapper[4860]: E0121 21:11:14.216288 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.216253121 +0000 UTC m=+167.438431591 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.355694 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:14 crc kubenswrapper[4860]: E0121 21:11:14.356312 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:14.856285735 +0000 UTC m=+167.078464205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.379031 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" event={"ID":"63cbab1e-f06a-4692-836f-3cdbb9260104","Type":"ContainerStarted","Data":"6cb1ac12cc53bafdf31ee2d320394b1fb2367210818875e16773edf37d270150"} Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.380174 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj" event={"ID":"f9fa07de-d775-4c9b-af3e-03b39e6c33b6","Type":"ContainerStarted","Data":"590c9957d5e3c19c3a4114379dc8ddd94e00897f8da1b10c71e72b3813aa3e07"} Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.381632 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-d6fh7" event={"ID":"c8477838-23bd-4bd8-8e37-bdf34bff841b","Type":"ContainerStarted","Data":"3bc9bb31cdfb7d0329b3811aa88a2f9fdde37ddffd771037d7248b2bb0c29fd0"} Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.387959 4860 generic.go:334] "Generic (PLEG): container finished" podID="f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde" containerID="f970ef79f8e48e1ba73065392d2ed51a0ae4b684ea9e0ce67e88829841e48a1d" exitCode=0 Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.388045 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" 
event={"ID":"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde","Type":"ContainerDied","Data":"f970ef79f8e48e1ba73065392d2ed51a0ae4b684ea9e0ce67e88829841e48a1d"} Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.397266 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" event={"ID":"fb13868e-5322-4a98-b168-40a0a6bd8459","Type":"ContainerStarted","Data":"207a7e402cd0ab58554a33033af98800de2807214661f77ceceae45b2e1308ba"} Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.398309 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.399618 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54" event={"ID":"84721999-239a-421e-a892-de0042ff1937","Type":"ContainerStarted","Data":"319d6c862f0e7d84fac430d4254cc9b6378b551a73ab593eecc870222bd19057"} Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.400610 4860 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xxb4c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.400653 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" podUID="fb13868e-5322-4a98-b168-40a0a6bd8459" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.423348 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-d6fh7" 
podStartSLOduration=8.423311498 podStartE2EDuration="8.423311498s" podCreationTimestamp="2026-01-21 21:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:14.422077201 +0000 UTC m=+166.644255691" watchObservedRunningTime="2026-01-21 21:11:14.423311498 +0000 UTC m=+166.645489968" Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.430778 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hbh47" event={"ID":"235af04d-ef1a-4328-a0c4-aa6d5bc04b92","Type":"ContainerStarted","Data":"927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e"} Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.434309 4860 generic.go:334] "Generic (PLEG): container finished" podID="3a3fc408-742d-46bb-93cd-05343faababf" containerID="b5c738210e7294abea693c0efca076532c43b248e878a07788ac056a176752da" exitCode=0 Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.434596 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" event={"ID":"3a3fc408-742d-46bb-93cd-05343faababf","Type":"ContainerDied","Data":"b5c738210e7294abea693c0efca076532c43b248e878a07788ac056a176752da"} Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.439316 4860 generic.go:334] "Generic (PLEG): container finished" podID="32bee613-dd08-4612-936c-dd68b630651e" containerID="e00b5569d959b5b6258bdd34805c069504bc82671aaa87266fdb7e5407da2877" exitCode=0 Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.439414 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" event={"ID":"32bee613-dd08-4612-936c-dd68b630651e","Type":"ContainerDied","Data":"e00b5569d959b5b6258bdd34805c069504bc82671aaa87266fdb7e5407da2877"} Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.525507 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:14 crc kubenswrapper[4860]: E0121 21:11:14.528569 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.028541491 +0000 UTC m=+167.250720151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.613203 4860 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fvk47 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.613835 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" podUID="d1fafd15-88be-43d0-b7f0-750b4c592352" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.622550 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" event={"ID":"d1fafd15-88be-43d0-b7f0-750b4c592352","Type":"ContainerStarted","Data":"8e302fd9b576efe352f096883635071768d95448c6a3a15ffbc717925ce42a26"} Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.622605 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.630233 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.635330 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:14 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:14 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:14 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.635410 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:14 crc kubenswrapper[4860]: E0121 21:11:14.635713 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 21:11:15.135685412 +0000 UTC m=+167.357863892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.637434 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:14 crc kubenswrapper[4860]: E0121 21:11:14.638612 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.138598361 +0000 UTC m=+167.360776831 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.647550 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" podStartSLOduration=141.647482202 podStartE2EDuration="2m21.647482202s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:14.618791539 +0000 UTC m=+166.840970009" watchObservedRunningTime="2026-01-21 21:11:14.647482202 +0000 UTC m=+166.869660672" Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.680120 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g" event={"ID":"751528b2-dccf-44a3-abc3-d044da642fd6","Type":"ContainerStarted","Data":"97872d404410e7784256ce1ae23c2d15dbc0db200b2d7ba8385d7328f846ba69"} Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.747386 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:14 crc kubenswrapper[4860]: E0121 21:11:14.748168 4860 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.248053003 +0000 UTC m=+167.470231483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.748818 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:14 crc kubenswrapper[4860]: E0121 21:11:14.749363 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.249351743 +0000 UTC m=+167.471530393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.766650 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" event={"ID":"35ea2f50-9645-4c72-85be-367a40e4a19e","Type":"ContainerStarted","Data":"d534017185479d0d8f510c562386c4512c2d0af7d36f61073d06c54019ebb380"}
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.776269 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" event={"ID":"5a070564-7a41-4207-b27f-d6ebddec9a55","Type":"ContainerStarted","Data":"f0e99ee236f592e0ca8f57c6a73ad96bee131df1e19873295a7ba0acf7f88416"}
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.783447 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-hbh47" podStartSLOduration=142.78342433 podStartE2EDuration="2m22.78342433s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:14.776732456 +0000 UTC m=+166.998910936" watchObservedRunningTime="2026-01-21 21:11:14.78342433 +0000 UTC m=+167.005602800"
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.809269 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" event={"ID":"70c3c027-6018-4182-bf8c-6309230608eb","Type":"ContainerStarted","Data":"34667afbc98e26773c6d5fc353d763390c48753acfa8978cb49c7332c5dc0518"}
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.837447 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6" event={"ID":"265b2226-a08f-4ba0-b20a-25e422c21c37","Type":"ContainerStarted","Data":"8f64332ac6400683ca0b4305e3bc9a0a6a237de88da6aa6d0b5535fb2a8fc631"}
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.841141 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn" event={"ID":"971caae6-3ca9-4e02-852f-47abcf2bff31","Type":"ContainerStarted","Data":"ceac10bd10112aa9951eb2d035cafc047901b33a17d0ea986cc09ea986c4dd59"}
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.851336 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:14 crc kubenswrapper[4860]: E0121 21:11:14.851644 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.351628466 +0000 UTC m=+167.573806936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.857236 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l" event={"ID":"19b5214f-7427-49e9-a40e-2c295e1600d4","Type":"ContainerStarted","Data":"131ef37b83606f2c2c55e59cda455ff7e5dc5bf4407446006ace36ffaf5e7026"}
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.909099 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" event={"ID":"b5933589-42d6-47af-b723-2af986d94c98","Type":"ContainerStarted","Data":"dc081ef1b0c4c92918256468ab2ad9a4b1d07586a944ef712a2ee69cc5edd27f"}
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.911333 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" podStartSLOduration=142.911315553 podStartE2EDuration="2m22.911315553s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:14.909691594 +0000 UTC m=+167.131870084" watchObservedRunningTime="2026-01-21 21:11:14.911315553 +0000 UTC m=+167.133494023"
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.916282 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" event={"ID":"baea563c-2833-407f-9cfb-571b93350be2","Type":"ContainerStarted","Data":"3c7964c6aec95df8c176ac8d06d54b52ebcc9966ffd8de8af978a218b6866c3c"}
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.924333 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452" event={"ID":"70aea1b0-13b2-43ee-a77d-10c3143e4a95","Type":"ContainerStarted","Data":"167d69e668d61ec5eeda511577428c01cb7592a5e89c263e1a0bba601642d5ad"}
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.953210 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:14 crc kubenswrapper[4860]: E0121 21:11:14.953570 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.453557099 +0000 UTC m=+167.675735569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.979164 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" event={"ID":"bfcb6184-d86e-4425-9c9c-99ec900dea78","Type":"ContainerStarted","Data":"9c051d2cbe2b3b5d62cf0121663de53accc6831dfc925d6519d8412b78c6b8ab"}
Jan 21 21:11:14 crc kubenswrapper[4860]: I0121 21:11:14.980905 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4x452" podStartSLOduration=141.980836779 podStartE2EDuration="2m21.980836779s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:14.978847339 +0000 UTC m=+167.201025839" watchObservedRunningTime="2026-01-21 21:11:14.980836779 +0000 UTC m=+167.203015289"
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.016154 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hv4bj" event={"ID":"8445d936-5e91-4817-afda-a75203024c29","Type":"ContainerStarted","Data":"4a33e3713c8b17ee3d8bd0aff63c7aa597cebade5d96a9bdecf1b748e8b3d638"}
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.019001 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hv4bj"
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.044516 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt" event={"ID":"40070d0f-4d18-4d7c-a85a-cd2f904ea27a","Type":"ContainerStarted","Data":"abb791285858d27d922962670ad5c3a08ad22552e726b200d15d887a9bda0201"}
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.047452 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hv4bj" podStartSLOduration=143.047393405 podStartE2EDuration="2m23.047393405s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:15.0436055 +0000 UTC m=+167.265783960" watchObservedRunningTime="2026-01-21 21:11:15.047393405 +0000 UTC m=+167.269571885"
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.056347 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.057570 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.058133 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 21 21:11:15 crc kubenswrapper[4860]: E0121 21:11:15.058192 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.558159523 +0000 UTC m=+167.780338143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.062240 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" event={"ID":"08186c65-b069-4756-af19-5255a7a5fe2f","Type":"ContainerStarted","Data":"f2b03ee49d130c63ed033fe23993152f4c9946de3508947c6c5e37c3b2ec54f5"}
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.064963 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc" event={"ID":"5c1cfe23-822a-462f-9db6-b4d87eae0d58","Type":"ContainerStarted","Data":"16de56d210c90a29977aee450db6ac26056da7aa9fbd2dd8fe7dfe2ade47453c"}
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.066418 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-b9252" event={"ID":"3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac","Type":"ContainerStarted","Data":"1bfccb0d37834fb837be2875a793a0e7f5b17b6345843e5b42c63fcde6e93bb3"}
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.067592 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s24bn" event={"ID":"11785eb3-a6cf-47e9-b902-3733703720ca","Type":"ContainerStarted","Data":"93e0c16eb8389db650171fb9e65ff96e27d941b2905bae188e3c7a8d35dd472a"}
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.068955 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4dq5s" event={"ID":"be1fd5b6-dccd-44e4-b38b-8c0ca448f013","Type":"ContainerStarted","Data":"8d10daf49ce5affb4b7e9885e5e5ed293021fd815a0ca367d188e5c74071a6ee"}
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.071033 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns" event={"ID":"b56d611d-64a3-491f-b878-da0793846cef","Type":"ContainerStarted","Data":"d36d5c1986a813cbd1314e7ae1cd0145a50d88ab842917b5503aa19f7bc79486"}
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.158975 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:15 crc kubenswrapper[4860]: E0121 21:11:15.160177 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.660134927 +0000 UTC m=+167.882313397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.271849 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:15 crc kubenswrapper[4860]: E0121 21:11:15.275239 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.77520349 +0000 UTC m=+167.997381960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.374950 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:15 crc kubenswrapper[4860]: E0121 21:11:15.375428 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.8754076 +0000 UTC m=+168.097586070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.476492 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:15 crc kubenswrapper[4860]: E0121 21:11:15.477279 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:15.9772523 +0000 UTC m=+168.199430770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.666299 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:15 crc kubenswrapper[4860]: E0121 21:11:15.666725 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:16.166705537 +0000 UTC m=+168.388884007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.671346 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:15 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:15 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:15 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.671442 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.774643 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:15 crc kubenswrapper[4860]: E0121 21:11:15.776898 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:16.27685367 +0000 UTC m=+168.499032140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.777152 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:15 crc kubenswrapper[4860]: E0121 21:11:15.777535 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:16.277525561 +0000 UTC m=+168.499704031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.878256 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:15 crc kubenswrapper[4860]: E0121 21:11:15.878777 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:16.378749102 +0000 UTC m=+168.600927572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:15 crc kubenswrapper[4860]: I0121 21:11:15.879025 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:15 crc kubenswrapper[4860]: E0121 21:11:15.879592 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:16.379565417 +0000 UTC m=+168.601743887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.108481 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:16 crc kubenswrapper[4860]: E0121 21:11:16.109159 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:16.609117875 +0000 UTC m=+168.831296535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.203355 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn" event={"ID":"d81c2475-b36c-44d5-a7da-1bec8c5871b0","Type":"ContainerStarted","Data":"065b1a71c00756ee7f2362deec7bbd51e8d862a3a98e9ea1dd0549e9908b4beb"}
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.210327 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj" event={"ID":"f9fa07de-d775-4c9b-af3e-03b39e6c33b6","Type":"ContainerStarted","Data":"21505ba8f901e56b85cba921a5f9133f6c641bd657ecf5b41fa2e47aa9ca2d2e"}
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.212381 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:16 crc kubenswrapper[4860]: E0121 21:11:16.214228 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:16.714199234 +0000 UTC m=+168.936377704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.222269 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh" event={"ID":"ecb5870e-f9cf-4b70-ac31-4d62d2902bf8","Type":"ContainerStarted","Data":"eec062fd94797e7855dad62a8927d1d29b2c7338bfc094b6aaf4926e870005f8"}
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.228657 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g" event={"ID":"751528b2-dccf-44a3-abc3-d044da642fd6","Type":"ContainerStarted","Data":"2ca856cda01d00e08ab4f26131072177798e3357211f1453f89d13ff4317dbee"}
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.259711 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" event={"ID":"35ea2f50-9645-4c72-85be-367a40e4a19e","Type":"ContainerStarted","Data":"275068da969eec01543d7c0a0b54991f78fab92acd80736bf35d7df883baba80"}
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.265996 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t9nqj" podStartSLOduration=143.265959659 podStartE2EDuration="2m23.265959659s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:16.250590332 +0000 UTC m=+168.472768822" watchObservedRunningTime="2026-01-21 21:11:16.265959659 +0000 UTC m=+168.488138149"
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.291167 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hs75g" podStartSLOduration=143.291125965 podStartE2EDuration="2m23.291125965s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:16.283691679 +0000 UTC m=+168.505870149" watchObservedRunningTime="2026-01-21 21:11:16.291125965 +0000 UTC m=+168.513304435"
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.310789 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-slx45" podStartSLOduration=143.310762403 podStartE2EDuration="2m23.310762403s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:16.309749901 +0000 UTC m=+168.531928371" watchObservedRunningTime="2026-01-21 21:11:16.310762403 +0000 UTC m=+168.532940873"
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.315164 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:16 crc kubenswrapper[4860]: E0121 21:11:16.315365 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:16.815345142 +0000 UTC m=+169.037523622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.316304 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:16 crc kubenswrapper[4860]: E0121 21:11:16.317927 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:16.817909731 +0000 UTC m=+169.040088201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.319004 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" event={"ID":"70c3c027-6018-4182-bf8c-6309230608eb","Type":"ContainerStarted","Data":"98e085bab2ed495572fc71b1486e2489201c645134c462a158afc44af523e337"}
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.321361 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54" event={"ID":"84721999-239a-421e-a892-de0042ff1937","Type":"ContainerStarted","Data":"0537c4b11f64e2a6967290b414b909e5d9ddd41a53aedb2778ba7013b541c0f2"}
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.327075 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" event={"ID":"402083eb-5844-4f8c-8dfa-067947a1bc48","Type":"ContainerStarted","Data":"c8640390172ad838da05f1ab3ca59eb1907b944f66c2764376887839c1174cfc"}
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.327707 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k"
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.329285 4860 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-stn5k container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.329320 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" podUID="402083eb-5844-4f8c-8dfa-067947a1bc48" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.329751 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d" event={"ID":"c337e9fe-a7db-4b56-92c4-82905fb59d53","Type":"ContainerStarted","Data":"d051f8c6455ebdc868bf10307dcc672a81a317dd539f6b5b433b02b61b0e0bf5"}
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.333996 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.334049 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.335479 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c"
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.335798 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47"
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.346340 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" podStartSLOduration=144.346319065 podStartE2EDuration="2m24.346319065s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:16.344996765 +0000 UTC m=+168.567175235" watchObservedRunningTime="2026-01-21 21:11:16.346319065 +0000 UTC m=+168.568497535"
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.557602 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" podStartSLOduration=143.557569846 podStartE2EDuration="2m23.557569846s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:16.555176593 +0000 UTC m=+168.777355073" watchObservedRunningTime="2026-01-21 21:11:16.557569846 +0000 UTC m=+168.779748336"
Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.559882 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:16 crc kubenswrapper[4860]: E0121 21:11:16.562226 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:17.062178646 +0000 UTC m=+169.284357286 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.635910 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:16 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:16 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:16 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.635995 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.667727 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:16 crc kubenswrapper[4860]: E0121 21:11:16.670618 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 21:11:17.170593127 +0000 UTC m=+169.392771777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.772331 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:16 crc kubenswrapper[4860]: E0121 21:11:16.774196 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:17.274163369 +0000 UTC m=+169.496341839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:16 crc kubenswrapper[4860]: I0121 21:11:16.874497 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:16 crc kubenswrapper[4860]: E0121 21:11:16.875623 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:17.375606227 +0000 UTC m=+169.597784697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.046441 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:17 crc kubenswrapper[4860]: E0121 21:11:17.065563 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:17.565502985 +0000 UTC m=+169.787681445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.156826 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:17 crc kubenswrapper[4860]: E0121 21:11:17.157644 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:17.657630992 +0000 UTC m=+169.879809462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.258728 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:17 crc kubenswrapper[4860]: E0121 21:11:17.259257 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:17.759230405 +0000 UTC m=+169.981408875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.360091 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:17 crc kubenswrapper[4860]: E0121 21:11:17.360754 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:17.860730795 +0000 UTC m=+170.082909265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.383736 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" event={"ID":"f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde","Type":"ContainerStarted","Data":"01906ed5bf8ca542dbfe64f1032dfc7638a2f241219dd5ce09efccb34eec9d9f"} Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.385191 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.405260 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc" event={"ID":"2e29e04b-89f7-4d77-8e17-0355493a1d9f","Type":"ContainerStarted","Data":"859c191e16399aa72414a677708898c2b698c5c3ab9a23d988b82e1c4ba92d72"} Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.422208 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" event={"ID":"63cbab1e-f06a-4692-836f-3cdbb9260104","Type":"ContainerStarted","Data":"b048ccb95cec757ddc9493316e87db67a9c49b2655220d88758ad1a021577c8a"} Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.429624 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" podStartSLOduration=145.4295941 podStartE2EDuration="2m25.4295941s" podCreationTimestamp="2026-01-21 21:08:52 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:17.428127886 +0000 UTC m=+169.650306376" watchObservedRunningTime="2026-01-21 21:11:17.4295941 +0000 UTC m=+169.651772570" Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.458553 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns" event={"ID":"b56d611d-64a3-491f-b878-da0793846cef","Type":"ContainerStarted","Data":"51842aa5172b7b92265795ddfb6e4e38897e3e50271c3fcc136fea960b024554"} Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.463013 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns" Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.463590 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:17 crc kubenswrapper[4860]: E0121 21:11:17.466702 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:17.966652059 +0000 UTC m=+170.188830539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.476908 4860 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vz8ns container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.477035 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns" podUID="b56d611d-64a3-491f-b878-da0793846cef" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.485591 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s24bn" event={"ID":"11785eb3-a6cf-47e9-b902-3733703720ca","Type":"ContainerStarted","Data":"6ca76ac456c1bce9798a71b299b5e6e15dc165739da2361f313aecd469afa3a2"} Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.490077 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-tcx72" podStartSLOduration=144.490027311 podStartE2EDuration="2m24.490027311s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:17.477277612 +0000 UTC 
m=+169.699456092" watchObservedRunningTime="2026-01-21 21:11:17.490027311 +0000 UTC m=+169.712205811" Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.528998 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l" event={"ID":"19b5214f-7427-49e9-a40e-2c295e1600d4","Type":"ContainerStarted","Data":"e0512be78caf24423c94284f76f0bf65d70994e578fd1034670ef1b078274928"} Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.545737 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns" podStartSLOduration=144.545715516 podStartE2EDuration="2m24.545715516s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:17.543605682 +0000 UTC m=+169.765784152" watchObservedRunningTime="2026-01-21 21:11:17.545715516 +0000 UTC m=+169.767893986" Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.553981 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" event={"ID":"08186c65-b069-4756-af19-5255a7a5fe2f","Type":"ContainerStarted","Data":"d24f3466eddf26230a582509178a1add0f7066966ee3c5a4cd215c896075d209"} Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.564219 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4dq5s" event={"ID":"be1fd5b6-dccd-44e4-b38b-8c0ca448f013","Type":"ContainerStarted","Data":"bb0fbe296488614dd0dfdd453325bde8258f0e169005444818605abf2afdb8d9"} Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.576681 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 
10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.577691 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.590118 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.594860 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stn5k" Jan 21 21:11:17 crc kubenswrapper[4860]: E0121 21:11:17.603527 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:18.103111863 +0000 UTC m=+170.325290333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.631315 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g8tw8" podStartSLOduration=145.63128995 podStartE2EDuration="2m25.63128995s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:17.627873927 +0000 UTC m=+169.850052407" watchObservedRunningTime="2026-01-21 21:11:17.63128995 +0000 UTC m=+169.853468420" Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.636358 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:17 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:17 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:17 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.636498 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.696233 4860 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:17 crc kubenswrapper[4860]: E0121 21:11:17.697089 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:18.197061813 +0000 UTC m=+170.419240283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.736891 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4dq5s" podStartSLOduration=11.736872195 podStartE2EDuration="11.736872195s" podCreationTimestamp="2026-01-21 21:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:17.736279807 +0000 UTC m=+169.958458277" watchObservedRunningTime="2026-01-21 21:11:17.736872195 +0000 UTC m=+169.959050665" Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.800414 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:17 crc kubenswrapper[4860]: E0121 21:11:17.800978 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:18.300958256 +0000 UTC m=+170.523136726 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:17 crc kubenswrapper[4860]: E0121 21:11:17.902329 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:18.40229402 +0000 UTC m=+170.624472490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.902180 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:17 crc kubenswrapper[4860]: I0121 21:11:17.902857 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:17 crc kubenswrapper[4860]: E0121 21:11:17.903381 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:18.403373043 +0000 UTC m=+170.625551513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:18 crc kubenswrapper[4860]: I0121 21:11:18.006480 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:18 crc kubenswrapper[4860]: E0121 21:11:18.006687 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:18.506652657 +0000 UTC m=+170.728831127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:18 crc kubenswrapper[4860]: I0121 21:11:18.006895 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:18 crc kubenswrapper[4860]: E0121 21:11:18.007301 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:18.507280916 +0000 UTC m=+170.729459386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:18 crc kubenswrapper[4860]: I0121 21:11:18.115759 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:18 crc kubenswrapper[4860]: E0121 21:11:18.116778 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:18.616751318 +0000 UTC m=+170.838929788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.099011 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.099431 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.099692 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.099745 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.105444 4860 patch_prober.go:28] interesting pod/console-f9d7485db-hbh47 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.105547 4860 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hbh47" podUID="235af04d-ef1a-4328-a0c4-aa6d5bc04b92" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 21 21:11:19 crc kubenswrapper[4860]: E0121 21:11:19.106304 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:20.106241211 +0000 UTC m=+172.328419681 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.107459 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-v4hsh" Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.108917 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.108979 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: 
connect: connection refused" Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.109068 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.109083 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.124968 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:19 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:19 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:19 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.125091 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.155181 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ae05da-3403-4a2f-92f4-2ffa574a65a8-metrics-certs\") pod \"network-metrics-daemon-rrwcr\" (UID: \"60ae05da-3403-4a2f-92f4-2ffa574a65a8\") " pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 
21:11:19.204387 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:19 crc kubenswrapper[4860]: E0121 21:11:19.206153 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:19.706130837 +0000 UTC m=+171.928309307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.254847 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l" event={"ID":"19b5214f-7427-49e9-a40e-2c295e1600d4","Type":"ContainerStarted","Data":"c7d51e1de59807fa621e8a092a099496796bca5248b89fe61a5ef09acc1714c1"} Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.328686 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:19 crc 
kubenswrapper[4860]: E0121 21:11:19.329372 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:19.829336662 +0000 UTC m=+172.051515132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.329455 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:19 crc kubenswrapper[4860]: E0121 21:11:19.334394 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:19.834360855 +0000 UTC m=+172.056539325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.373668 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vz8ns" Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.389232 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" event={"ID":"32bee613-dd08-4612-936c-dd68b630651e","Type":"ContainerStarted","Data":"5a5092a1c13889b6046518280ae091ced81ab78e1ad5c655c8ebdb6a06f640c5"} Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.400232 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rrwcr" Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.436131 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" event={"ID":"b5933589-42d6-47af-b723-2af986d94c98","Type":"ContainerStarted","Data":"4962b6e122d7c6b094f03d1e80c5c97ff74d13d1808146f7b0b4db262c501693"} Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.436832 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:19 crc kubenswrapper[4860]: E0121 21:11:19.437246 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:19.937225072 +0000 UTC m=+172.159403542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.960862 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.962606 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" event={"ID":"5a070564-7a41-4207-b27f-d6ebddec9a55","Type":"ContainerStarted","Data":"8699e7ecb37821d0c0e0bd5be584abe690392de448c85254916f7937360f41cf"} Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.963531 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.968182 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:19 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:19 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:19 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.968261 4860 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:19 crc kubenswrapper[4860]: E0121 21:11:19.977193 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:20.977122314 +0000 UTC m=+173.199300824 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.995827 4860 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ftls8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Jan 21 21:11:19 crc kubenswrapper[4860]: I0121 21:11:19.995924 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" podUID="5a070564-7a41-4207-b27f-d6ebddec9a55" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.048837 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6" 
event={"ID":"265b2226-a08f-4ba0-b20a-25e422c21c37","Type":"ContainerStarted","Data":"aed5c2c61ddb462925bc9dfed9feb2fd1ccd40812643eafd08866eaf24ac350e"} Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.068399 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:20 crc kubenswrapper[4860]: E0121 21:11:20.071381 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:20.571359848 +0000 UTC m=+172.793538318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.107191 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc" event={"ID":"2e29e04b-89f7-4d77-8e17-0355493a1d9f","Type":"ContainerStarted","Data":"2fa67932f36aa64a1b305bb81f692de350463f4ee3cd08bed2da978fdd5d46b5"} Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.168131 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" 
event={"ID":"baea563c-2833-407f-9cfb-571b93350be2","Type":"ContainerStarted","Data":"2e69aaccd5778a7550f58faa704b75bfd4d2115a5492de9b43ac1edbedd4d3e3"} Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.169423 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.169501 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" Jan 21 21:11:20 crc kubenswrapper[4860]: E0121 21:11:20.170865 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:20.670844652 +0000 UTC m=+172.893023122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.193267 4860 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k7nfg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.193356 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" podUID="baea563c-2833-407f-9cfb-571b93350be2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.212342 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" event={"ID":"bfcb6184-d86e-4425-9c9c-99ec900dea78","Type":"ContainerStarted","Data":"8e8dded1111122798b94c6a10f211b8371701ce00309dc50b63026e64e8ad6ec"} Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.275350 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:20 crc 
kubenswrapper[4860]: E0121 21:11:20.275846 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:20.775823504 +0000 UTC m=+172.998001974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.287307 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d" event={"ID":"c337e9fe-a7db-4b56-92c4-82905fb59d53","Type":"ContainerStarted","Data":"b8152f3a7dc8bb0fc69e673a98f42849373f5c8cc5a2a9ca2ce90e719e9390e3"} Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.626478 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:20 crc kubenswrapper[4860]: E0121 21:11:20.627393 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:21.127330998 +0000 UTC m=+173.349509478 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.655638 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:20 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:20 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:20 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.655740 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.670294 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54" event={"ID":"84721999-239a-421e-a892-de0042ff1937","Type":"ContainerStarted","Data":"b1ac6fd43e10d58ee48aa936c4d299ee813b4596e42468d34d910e5d7ac97dfa"} Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.684186 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-b9252" event={"ID":"3b502a61-43c2-4e9c-b9a9-0e3b2f6bc8ac","Type":"ContainerStarted","Data":"509f31db681f85b50315dbad46ebf0190578a4464a753f363ab0c2a7637d5e17"} Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 
21:11:20.806380 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:20 crc kubenswrapper[4860]: E0121 21:11:20.809569 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:21.309548698 +0000 UTC m=+173.531727168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.963291 4860 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwbfn container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.963384 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" podUID="f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection 
refused" Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.963290 4860 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwbfn container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.963522 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" podUID="f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.964309 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:20 crc kubenswrapper[4860]: E0121 21:11:20.964741 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:21.464723382 +0000 UTC m=+173.686901852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:20 crc kubenswrapper[4860]: I0121 21:11:20.980983 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh" event={"ID":"ecb5870e-f9cf-4b70-ac31-4d62d2902bf8","Type":"ContainerStarted","Data":"be067418ca0ac03b273ba310cfae7de0c28cc2dd833995cfd15fae0975fe32d0"}
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.030890 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" event={"ID":"3a3fc408-742d-46bb-93cd-05343faababf","Type":"ContainerStarted","Data":"b4593fa7e276357b3b048eaae64511740e293d8223026865997af81088b1a38d"}
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.059510 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gzkdc"]
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.060854 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.062306 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc" event={"ID":"5c1cfe23-822a-462f-9db6-b4d87eae0d58","Type":"ContainerStarted","Data":"4399ea6b33229bbf86be63ddabd774e9ea940149cffdee9ea0222d21cee70928"}
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.066182 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:21 crc kubenswrapper[4860]: E0121 21:11:21.066598 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:21.566581212 +0000 UTC m=+173.788759682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.087392 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt" event={"ID":"40070d0f-4d18-4d7c-a85a-cd2f904ea27a","Type":"ContainerStarted","Data":"a37834ffcdde3e4e70b870715014fba07af50b1e4208bc70b7813adbf49889cd"}
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.371699 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.465877 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.467207 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbsmz\" (UniqueName: \"kubernetes.io/projected/dda00c6f-b112-49c0-bef6-aa2770a1c323-kube-api-access-rbsmz\") pod \"certified-operators-gzkdc\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.467310 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-utilities\") pod \"certified-operators-gzkdc\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.467411 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-catalog-content\") pod \"certified-operators-gzkdc\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:11:21 crc kubenswrapper[4860]: E0121 21:11:21.469173 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:21.969122686 +0000 UTC m=+174.191301156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.477249 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn" event={"ID":"d81c2475-b36c-44d5-a7da-1bec8c5871b0","Type":"ContainerStarted","Data":"f64c6b8e081dd43e2de4b24de0dfa7ed8aedd23e5f821e541790fc58a61c6d38"}
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.478503 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.511837 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn" event={"ID":"971caae6-3ca9-4e02-852f-47abcf2bff31","Type":"ContainerStarted","Data":"d38458559ccad715c59bb9ea6863f5d6e9b160de40d2456f1f619fcda4ff8192"}
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.515199 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m2slz"]
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.518390 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m2slz"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.529142 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzkdc"]
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.533116 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l87hr"]
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.535441 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l87hr"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.549326 4860 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwbfn container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.549491 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" podUID="f2c8fe33-70a7-450a-9fb8-3c2c5dddbdde" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.550412 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.551063 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9dqdq"]
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.552726 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9dqdq"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.564716 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m2slz"]
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.569068 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbsmz\" (UniqueName: \"kubernetes.io/projected/dda00c6f-b112-49c0-bef6-aa2770a1c323-kube-api-access-rbsmz\") pod \"certified-operators-gzkdc\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.569125 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-utilities\") pod \"certified-operators-gzkdc\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.569194 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-catalog-content\") pod \"certified-operators-gzkdc\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.569886 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-utilities\") pod \"certified-operators-gzkdc\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.569955 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.570403 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-catalog-content\") pod \"certified-operators-gzkdc\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:11:21 crc kubenswrapper[4860]: E0121 21:11:21.573059 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:22.072909416 +0000 UTC m=+174.295088086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.575992 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l87hr"]
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.637878 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9dqdq"]
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.641595 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:21 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:21 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:21 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.641663 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.661562 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbsmz\" (UniqueName: \"kubernetes.io/projected/dda00c6f-b112-49c0-bef6-aa2770a1c323-kube-api-access-rbsmz\") pod \"certified-operators-gzkdc\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.675999 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.676279 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f45rh\" (UniqueName: \"kubernetes.io/projected/c599eaed-fddf-4591-a474-f8c85a5470ae-kube-api-access-f45rh\") pod \"certified-operators-l87hr\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " pod="openshift-marketplace/certified-operators-l87hr"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.676349 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-catalog-content\") pod \"community-operators-m2slz\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " pod="openshift-marketplace/community-operators-m2slz"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.676411 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-utilities\") pod \"community-operators-9dqdq\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " pod="openshift-marketplace/community-operators-9dqdq"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.676443 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-utilities\") pod \"community-operators-m2slz\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " pod="openshift-marketplace/community-operators-m2slz"
Jan 21 21:11:21 crc kubenswrapper[4860]: E0121 21:11:21.676488 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:22.176451878 +0000 UTC m=+174.398630418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.676806 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-catalog-content\") pod \"certified-operators-l87hr\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " pod="openshift-marketplace/certified-operators-l87hr"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.676984 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-utilities\") pod \"certified-operators-l87hr\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " pod="openshift-marketplace/certified-operators-l87hr"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.677082 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-catalog-content\") pod \"community-operators-9dqdq\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " pod="openshift-marketplace/community-operators-9dqdq"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.677107 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8j26\" (UniqueName: \"kubernetes.io/projected/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-kube-api-access-v8j26\") pod \"community-operators-9dqdq\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " pod="openshift-marketplace/community-operators-9dqdq"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.677173 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk958\" (UniqueName: \"kubernetes.io/projected/adf72aac-c719-4347-824a-c033f4f3a240-kube-api-access-wk958\") pod \"community-operators-m2slz\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " pod="openshift-marketplace/community-operators-m2slz"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.690772 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.808288 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-catalog-content\") pod \"certified-operators-l87hr\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " pod="openshift-marketplace/certified-operators-l87hr"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.809505 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-utilities\") pod \"certified-operators-l87hr\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " pod="openshift-marketplace/certified-operators-l87hr"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.809575 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-catalog-content\") pod \"community-operators-9dqdq\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " pod="openshift-marketplace/community-operators-9dqdq"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.809621 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8j26\" (UniqueName: \"kubernetes.io/projected/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-kube-api-access-v8j26\") pod \"community-operators-9dqdq\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " pod="openshift-marketplace/community-operators-9dqdq"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.809696 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk958\" (UniqueName: \"kubernetes.io/projected/adf72aac-c719-4347-824a-c033f4f3a240-kube-api-access-wk958\") pod \"community-operators-m2slz\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " pod="openshift-marketplace/community-operators-m2slz"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.809776 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f45rh\" (UniqueName: \"kubernetes.io/projected/c599eaed-fddf-4591-a474-f8c85a5470ae-kube-api-access-f45rh\") pod \"certified-operators-l87hr\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " pod="openshift-marketplace/certified-operators-l87hr"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.809800 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-catalog-content\") pod \"community-operators-m2slz\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " pod="openshift-marketplace/community-operators-m2slz"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.809853 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.809886 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-utilities\") pod \"community-operators-9dqdq\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " pod="openshift-marketplace/community-operators-9dqdq"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.809943 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-utilities\") pod \"community-operators-m2slz\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " pod="openshift-marketplace/community-operators-m2slz"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.810580 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-utilities\") pod \"community-operators-m2slz\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " pod="openshift-marketplace/community-operators-m2slz"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.810978 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-catalog-content\") pod \"certified-operators-l87hr\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " pod="openshift-marketplace/certified-operators-l87hr"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.811280 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-utilities\") pod \"certified-operators-l87hr\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " pod="openshift-marketplace/certified-operators-l87hr"
Jan 21 21:11:21 crc kubenswrapper[4860]: I0121 21:11:21.839100 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z6kb9"]
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:21.891333 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-catalog-content\") pod \"community-operators-9dqdq\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " pod="openshift-marketplace/community-operators-9dqdq"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:21.892901 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-catalog-content\") pod \"community-operators-m2slz\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " pod="openshift-marketplace/community-operators-m2slz"
Jan 21 21:11:22 crc kubenswrapper[4860]: E0121 21:11:21.893796 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:22.393777572 +0000 UTC m=+174.615956042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:21.894331 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-utilities\") pod \"community-operators-9dqdq\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " pod="openshift-marketplace/community-operators-9dqdq"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:21.896456 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:21.902243 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:21.914559 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:21.914912 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-catalog-content\") pod \"redhat-marketplace-z6kb9\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:21.915025 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-utilities\") pod \"redhat-marketplace-z6kb9\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:21.915203 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pd7m\" (UniqueName: \"kubernetes.io/projected/a21cacfb-049f-48d8-8c5d-4ad7ee333834-kube-api-access-9pd7m\") pod \"redhat-marketplace-z6kb9\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:11:22 crc kubenswrapper[4860]: E0121 21:11:21.915347 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:22.415326838 +0000 UTC m=+174.637505308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:21.916448 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6kb9"]
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.018348 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.018419 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pd7m\" (UniqueName: \"kubernetes.io/projected/a21cacfb-049f-48d8-8c5d-4ad7ee333834-kube-api-access-9pd7m\") pod \"redhat-marketplace-z6kb9\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.018468 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-catalog-content\") pod \"redhat-marketplace-z6kb9\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.018531 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-utilities\") pod \"redhat-marketplace-z6kb9\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.019606 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-utilities\") pod \"redhat-marketplace-z6kb9\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:11:22 crc kubenswrapper[4860]: E0121 21:11:22.020909 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:22.520890561 +0000 UTC m=+174.743069031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.031642 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8j26\" (UniqueName: \"kubernetes.io/projected/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-kube-api-access-v8j26\") pod \"community-operators-9dqdq\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " pod="openshift-marketplace/community-operators-9dqdq"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.032072 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-catalog-content\") pod \"redhat-marketplace-z6kb9\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.037170 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f45rh\" (UniqueName: \"kubernetes.io/projected/c599eaed-fddf-4591-a474-f8c85a5470ae-kube-api-access-f45rh\") pod \"certified-operators-l87hr\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " pod="openshift-marketplace/certified-operators-l87hr"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.090088 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk958\" (UniqueName: \"kubernetes.io/projected/adf72aac-c719-4347-824a-c033f4f3a240-kube-api-access-wk958\") pod \"community-operators-m2slz\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " pod="openshift-marketplace/community-operators-m2slz"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.121084 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:22 crc kubenswrapper[4860]: E0121 21:11:22.121746 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:22.621659699 +0000 UTC m=+174.843838169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.133002 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zh97n"]
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.134528 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zh97n"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.200447 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m2slz"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.237232 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:22 crc kubenswrapper[4860]: E0121 21:11:22.237611 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:22.737596249 +0000 UTC m=+174.959774719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.238392 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pd7m\" (UniqueName: \"kubernetes.io/projected/a21cacfb-049f-48d8-8c5d-4ad7ee333834-kube-api-access-9pd7m\") pod \"redhat-marketplace-z6kb9\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.238428 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh97n"]
Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.243756 4860 util.go:30]
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l87hr" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.353409 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.353469 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9dqdq" Jan 21 21:11:22 crc kubenswrapper[4860]: E0121 21:11:22.353703 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:22.853666692 +0000 UTC m=+175.075845162 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.354642 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.354706 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-catalog-content\") pod \"redhat-marketplace-zh97n\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.354739 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bvkv\" (UniqueName: \"kubernetes.io/projected/6d731289-0564-4ea3-a2ea-c19c361c0d3e-kube-api-access-2bvkv\") pod \"redhat-marketplace-zh97n\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.354768 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-utilities\") pod \"redhat-marketplace-zh97n\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:11:22 crc kubenswrapper[4860]: E0121 21:11:22.355274 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:22.85525489 +0000 UTC m=+175.077433360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.370607 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6kb9" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.456003 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.456599 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-catalog-content\") pod \"redhat-marketplace-zh97n\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.456711 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bvkv\" (UniqueName: \"kubernetes.io/projected/6d731289-0564-4ea3-a2ea-c19c361c0d3e-kube-api-access-2bvkv\") pod \"redhat-marketplace-zh97n\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.456793 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-utilities\") pod \"redhat-marketplace-zh97n\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.457437 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-utilities\") pod \"redhat-marketplace-zh97n\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " 
pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.457848 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-catalog-content\") pod \"redhat-marketplace-zh97n\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:11:22 crc kubenswrapper[4860]: E0121 21:11:22.458013 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:22.957982437 +0000 UTC m=+175.180160897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.559344 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:22 crc kubenswrapper[4860]: E0121 21:11:22.560312 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 21:11:23.060294872 +0000 UTC m=+175.282473342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.575824 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ngmkj"] Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.577554 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.599082 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.636105 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bvkv\" (UniqueName: \"kubernetes.io/projected/6d731289-0564-4ea3-a2ea-c19c361c0d3e-kube-api-access-2bvkv\") pod \"redhat-marketplace-zh97n\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.653319 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:22 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:22 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:22 crc 
kubenswrapper[4860]: healthz check failed Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.653651 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.709511 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.710122 4860 csr.go:261] certificate signing request csr-hb9km is approved, waiting to be issued Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.710185 4860 csr.go:257] certificate signing request csr-hb9km is issued Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.710271 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fxcf\" (UniqueName: \"kubernetes.io/projected/ce35873b-5e42-4d33-9212-f78afae53fd0-kube-api-access-5fxcf\") pod \"redhat-operators-ngmkj\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.710406 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-utilities\") pod \"redhat-operators-ngmkj\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.710723 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-catalog-content\") pod \"redhat-operators-ngmkj\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.759137 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ngmkj"] Jan 21 21:11:22 crc kubenswrapper[4860]: E0121 21:11:22.759500 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:23.259471165 +0000 UTC m=+175.481649635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.813713 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.813776 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fxcf\" (UniqueName: 
\"kubernetes.io/projected/ce35873b-5e42-4d33-9212-f78afae53fd0-kube-api-access-5fxcf\") pod \"redhat-operators-ngmkj\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.813816 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-utilities\") pod \"redhat-operators-ngmkj\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.814812 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-catalog-content\") pod \"redhat-operators-ngmkj\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.815504 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.816209 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-utilities\") pod \"redhat-operators-ngmkj\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:11:22 crc kubenswrapper[4860]: E0121 21:11:22.816782 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:23.316760338 +0000 UTC m=+175.538938808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:22 crc kubenswrapper[4860]: I0121 21:11:22.824142 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-catalog-content\") pod \"redhat-operators-ngmkj\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.331445 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fxcf\" (UniqueName: \"kubernetes.io/projected/ce35873b-5e42-4d33-9212-f78afae53fd0-kube-api-access-5fxcf\") pod \"redhat-operators-ngmkj\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.340381 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9rgh9"] Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.342496 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.350218 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.351243 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9rgh9"] Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.351861 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:23 crc kubenswrapper[4860]: E0121 21:11:23.353099 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:24.353072294 +0000 UTC m=+176.575250894 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.369517 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.371841 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.439916 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" event={"ID":"3a3fc408-742d-46bb-93cd-05343faababf","Type":"ContainerStarted","Data":"fc629de563991a08d796c47732d4e017b9198e7a30594ed37b0a5851bb75987b"} Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.459858 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:23 crc kubenswrapper[4860]: E0121 21:11:23.460823 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:23.960786893 +0000 UTC m=+176.182965363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.460920 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.460964 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-catalog-content\") pod \"redhat-operators-9rgh9\" (UID: \"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.461123 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-utilities\") pod \"redhat-operators-9rgh9\" (UID: \"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.461173 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxnr\" (UniqueName: \"kubernetes.io/projected/41129b4d-292c-46eb-807b-ed0c56b43c9b-kube-api-access-ckxnr\") 
pod \"redhat-operators-9rgh9\" (UID: \"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:11:23 crc kubenswrapper[4860]: E0121 21:11:23.463126 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:23.963105383 +0000 UTC m=+176.185283853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.469787 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" event={"ID":"b5933589-42d6-47af-b723-2af986d94c98","Type":"ContainerStarted","Data":"825b369325df8ef22252433c1dff745f2bd0f9a26347685310b0ff0341658cc0"} Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.560681 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.560742 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.560780 4860 patch_prober.go:28] interesting pod/apiserver-76f77b778f-q9n6j container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.30:8443/livez\": dial tcp 10.217.0.30:8443: connect: connection 
refused" start-of-body= Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.561232 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" podUID="3a3fc408-742d-46bb-93cd-05343faababf" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.30:8443/livez\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.561912 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:23 crc kubenswrapper[4860]: E0121 21:11:23.562398 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:24.062376695 +0000 UTC m=+176.284555165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.562447 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-utilities\") pod \"redhat-operators-9rgh9\" (UID: \"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.562482 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckxnr\" (UniqueName: \"kubernetes.io/projected/41129b4d-292c-46eb-807b-ed0c56b43c9b-kube-api-access-ckxnr\") pod \"redhat-operators-9rgh9\" (UID: \"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.562597 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.562628 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-catalog-content\") pod \"redhat-operators-9rgh9\" (UID: 
\"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:11:23 crc kubenswrapper[4860]: E0121 21:11:23.569464 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:24.06944188 +0000 UTC m=+176.291620350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.570070 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-utilities\") pod \"redhat-operators-9rgh9\" (UID: \"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.571900 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-catalog-content\") pod \"redhat-operators-9rgh9\" (UID: \"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.593887 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rrwcr"] Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.626460 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d" event={"ID":"c337e9fe-a7db-4b56-92c4-82905fb59d53","Type":"ContainerStarted","Data":"b73700d581dd9679f83cd70b63eb3044ff2d4a8ade19fe9f634af4c858965abb"} Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.665543 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:23 crc kubenswrapper[4860]: E0121 21:11:23.666071 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:24.166053672 +0000 UTC m=+176.388232142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.683762 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6" event={"ID":"265b2226-a08f-4ba0-b20a-25e422c21c37","Type":"ContainerStarted","Data":"68e442f3e04d6e7aae827cc271f8c6815a2e45b942f8338470e3fd887fd138e8"} Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.689060 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:23 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:23 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:23 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.689113 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.693154 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.698637 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s24bn" 
event={"ID":"11785eb3-a6cf-47e9-b902-3733703720ca","Type":"ContainerStarted","Data":"9291ff2327e949627853945b03acc7df3dee76775e27f9cf66b81d8fdf454832"} Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.698673 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-s24bn" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.717312 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckxnr\" (UniqueName: \"kubernetes.io/projected/41129b4d-292c-46eb-807b-ed0c56b43c9b-kube-api-access-ckxnr\") pod \"redhat-operators-9rgh9\" (UID: \"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.814448 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh" event={"ID":"ecb5870e-f9cf-4b70-ac31-4d62d2902bf8","Type":"ContainerStarted","Data":"ff23af622b5b9ae67b33836d027f9ff3ecfa56b0a753c5340303466b0020f73e"} Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.817349 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.820722 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 21:06:22 +0000 UTC, rotation deadline is 2026-11-06 20:48:39.64789617 +0000 UTC Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.820759 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6935h37m15.827140019s for next certificate rotation Jan 21 21:11:23 crc kubenswrapper[4860]: E0121 
21:11:23.821161 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:24.321142842 +0000 UTC m=+176.543321312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.831892 4860 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k7nfg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.831991 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" podUID="baea563c-2833-407f-9cfb-571b93350be2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.844108 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.866889 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" podStartSLOduration=150.866845724 podStartE2EDuration="2m30.866845724s" 
podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:23.863403419 +0000 UTC m=+176.085581909" watchObservedRunningTime="2026-01-21 21:11:23.866845724 +0000 UTC m=+176.089024194" Jan 21 21:11:23 crc kubenswrapper[4860]: I0121 21:11:23.935336 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:23 crc kubenswrapper[4860]: E0121 21:11:23.946966 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:24.446903571 +0000 UTC m=+176.669082241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.026106 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.094782 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:24 crc kubenswrapper[4860]: E0121 21:11:24.097244 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:24.597186996 +0000 UTC m=+176.819365466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.196364 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:24 crc kubenswrapper[4860]: E0121 21:11:24.198107 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:24.698070057 +0000 UTC m=+176.920248527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.315390 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:24 crc kubenswrapper[4860]: E0121 21:11:24.316271 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:24.816239504 +0000 UTC m=+177.038417974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.327868 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwbfn" Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.416413 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:24 crc kubenswrapper[4860]: E0121 21:11:24.416849 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:24.916830306 +0000 UTC m=+177.139008766 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.521253 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:24 crc kubenswrapper[4860]: E0121 21:11:24.522092 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:25.022077319 +0000 UTC m=+177.244255789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.626826 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:24 crc kubenswrapper[4860]: E0121 21:11:24.627247 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:25.127229021 +0000 UTC m=+177.349407491 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.731253 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:24 crc kubenswrapper[4860]: E0121 21:11:24.731862 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:25.231841334 +0000 UTC m=+177.454019794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.799679 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-s24bn" podStartSLOduration=18.799646439 podStartE2EDuration="18.799646439s" podCreationTimestamp="2026-01-21 21:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:24.283346742 +0000 UTC m=+176.505525232" watchObservedRunningTime="2026-01-21 21:11:24.799646439 +0000 UTC m=+177.021824909" Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.803170 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzkdc"] Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.826273 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:24 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:24 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:24 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.826356 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.826405 4860 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ftls8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.826521 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" podUID="5a070564-7a41-4207-b27f-d6ebddec9a55" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.849365 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:24 crc kubenswrapper[4860]: E0121 21:11:24.854988 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:25.354915841 +0000 UTC m=+177.577094311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.874787 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:24 crc kubenswrapper[4860]: E0121 21:11:24.875551 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:25.375534919 +0000 UTC m=+177.597713389 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.889429 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-lcbjc" podStartSLOduration=151.88938958 podStartE2EDuration="2m31.88938958s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:24.88446047 +0000 UTC m=+177.106638950" watchObservedRunningTime="2026-01-21 21:11:24.88938958 +0000 UTC m=+177.111568050" Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.928512 4860 generic.go:334] "Generic (PLEG): container finished" podID="70c3c027-6018-4182-bf8c-6309230608eb" containerID="98e085bab2ed495572fc71b1486e2489201c645134c462a158afc44af523e337" exitCode=0 Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.928713 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" event={"ID":"70c3c027-6018-4182-bf8c-6309230608eb","Type":"ContainerDied","Data":"98e085bab2ed495572fc71b1486e2489201c645134c462a158afc44af523e337"} Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.942890 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" event={"ID":"60ae05da-3403-4a2f-92f4-2ffa574a65a8","Type":"ContainerStarted","Data":"ddc39efd0d0f36c14112d33aab67780b13eafe20c25f40ed74aa13b69a0965b5"} Jan 21 21:11:24 crc 
kubenswrapper[4860]: I0121 21:11:24.946801 4860 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k7nfg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 21 21:11:24 crc kubenswrapper[4860]: I0121 21:11:24.946888 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" podUID="baea563c-2833-407f-9cfb-571b93350be2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.017443 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:25 crc kubenswrapper[4860]: E0121 21:11:25.020380 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:25.520342016 +0000 UTC m=+177.742520626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.120187 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:25 crc kubenswrapper[4860]: E0121 21:11:25.121229 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:25.621211527 +0000 UTC m=+177.843389997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.236924 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:25 crc kubenswrapper[4860]: E0121 21:11:25.237548 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:25.737514528 +0000 UTC m=+177.959692998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.339419 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:25 crc kubenswrapper[4860]: E0121 21:11:25.340026 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:25.840009287 +0000 UTC m=+178.062187757 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.452771 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 21:11:25 crc kubenswrapper[4860]: E0121 21:11:25.454048 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:25.954015708 +0000 UTC m=+178.176194178 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.522255 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" podStartSLOduration=153.522207934 podStartE2EDuration="2m33.522207934s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:25.292738678 +0000 UTC m=+177.514917158" watchObservedRunningTime="2026-01-21 21:11:25.522207934 +0000 UTC m=+177.744386404" Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.527712 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m2slz"] Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.557920 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:25 crc kubenswrapper[4860]: E0121 21:11:25.558420 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 21:11:26.058403835 +0000 UTC m=+178.280582305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.633857 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:25 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:25 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:25 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.633956 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.638152 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jx5dt" podStartSLOduration=152.638133622 podStartE2EDuration="2m32.638133622s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:25.536804618 +0000 UTC m=+177.758983088" watchObservedRunningTime="2026-01-21 21:11:25.638133622 +0000 UTC m=+177.860312102"
Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.648872 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh97n"]
Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.659499 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:25 crc kubenswrapper[4860]: E0121 21:11:25.660278 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:26.160252516 +0000 UTC m=+178.382430996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.682815 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6kb9"]
Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.779121 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:25 crc kubenswrapper[4860]: E0121 21:11:25.779660 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:26.27964315 +0000 UTC m=+178.501821620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.806110 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6scsc" podStartSLOduration=152.806078554 podStartE2EDuration="2m32.806078554s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:25.799039491 +0000 UTC m=+178.021217971" watchObservedRunningTime="2026-01-21 21:11:25.806078554 +0000 UTC m=+178.028257014"
Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.882186 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:25 crc kubenswrapper[4860]: E0121 21:11:25.883431 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:26.383405209 +0000 UTC m=+178.605583679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:25 crc kubenswrapper[4860]: I0121 21:11:25.898501 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l87hr"]
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:25.984223 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9dqdq"]
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:25.986649 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:26 crc kubenswrapper[4860]: E0121 21:11:25.987206 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:26.487191458 +0000 UTC m=+178.709369928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.007811 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-cgwn6" podStartSLOduration=153.007781434 podStartE2EDuration="2m33.007781434s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:26.006354202 +0000 UTC m=+178.228532672" watchObservedRunningTime="2026-01-21 21:11:26.007781434 +0000 UTC m=+178.229959904"
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.012978 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ngmkj"]
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.033824 4860 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.053363 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh97n" event={"ID":"6d731289-0564-4ea3-a2ea-c19c361c0d3e","Type":"ContainerStarted","Data":"19efae694f68181d86ce3d89348f13b1deada5710de0d20b482a4911c2fcf109"}
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.063128 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6kb9" event={"ID":"a21cacfb-049f-48d8-8c5d-4ad7ee333834","Type":"ContainerStarted","Data":"7dbfb2d0e8a210843fcefc935bac47fe884e62a474dd5846012b57516229b26a"}
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.087503 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" event={"ID":"bfcb6184-d86e-4425-9c9c-99ec900dea78","Type":"ContainerStarted","Data":"c6cdb8ad4b0f2dcecb127bd32f274d474d35b6341ce690c039a8f291fe1263e1"}
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.088144 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:26 crc kubenswrapper[4860]: E0121 21:11:26.088579 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:26.588553754 +0000 UTC m=+178.810732224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.118610 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" event={"ID":"60ae05da-3403-4a2f-92f4-2ffa574a65a8","Type":"ContainerStarted","Data":"197c58319543f66a83a5d113140c760fe735906c5cf224a94c8ae569087d5d66"}
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.143073 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2slz" event={"ID":"adf72aac-c719-4347-824a-c033f4f3a240","Type":"ContainerStarted","Data":"a907df6b7c339dd2a27bc5c066f3a63aca09edbebe5efafb427cb9b27d667e29"}
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.170481 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkdc" event={"ID":"dda00c6f-b112-49c0-bef6-aa2770a1c323","Type":"ContainerStarted","Data":"989806eae179705dba7fbbdfa9c7525b7b01c16da6db88bd079fdea9a35925ba"}
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.170597 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkdc" event={"ID":"dda00c6f-b112-49c0-bef6-aa2770a1c323","Type":"ContainerStarted","Data":"d3b70d219bc224cc60622f4e6c3c1eb5e8dd5081ffd68804c03913b88dcb00c6"}
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.197858 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:26 crc kubenswrapper[4860]: E0121 21:11:26.203079 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:26.703032179 +0000 UTC m=+178.925210649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.204205 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.208587 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dcw54" podStartSLOduration=153.208553527 podStartE2EDuration="2m33.208553527s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:26.098891308 +0000 UTC m=+178.321069798" watchObservedRunningTime="2026-01-21 21:11:26.208553527 +0000 UTC m=+178.430731997"
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.300532 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:26 crc kubenswrapper[4860]: E0121 21:11:26.301520 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:26.801489135 +0000 UTC m=+179.023667605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.302239 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:26 crc kubenswrapper[4860]: E0121 21:11:26.315782 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:26.815755629 +0000 UTC m=+179.037934099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.333077 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9rgh9"]
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.390944 4860 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T21:11:26.033874689Z","Handler":null,"Name":""}
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.407635 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:26 crc kubenswrapper[4860]: E0121 21:11:26.408000 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:26.907979547 +0000 UTC m=+179.130158017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.504064 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pr2fp" podStartSLOduration=153.504036581 podStartE2EDuration="2m33.504036581s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:26.478173833 +0000 UTC m=+178.700352313" watchObservedRunningTime="2026-01-21 21:11:26.504036581 +0000 UTC m=+178.726215051"
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.510062 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:26 crc kubenswrapper[4860]: E0121 21:11:26.510584 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:27.0105668 +0000 UTC m=+179.232745270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.612338 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:26 crc kubenswrapper[4860]: E0121 21:11:26.613111 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 21:11:27.11308646 +0000 UTC m=+179.335264930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.614148 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:26 crc kubenswrapper[4860]: E0121 21:11:26.614793 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 21:11:27.114781333 +0000 UTC m=+179.336959803 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nsjpv" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.632197 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:26 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:26 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:26 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.632271 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.662002 4860 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.662065 4860 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.716402 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.727051 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.819364 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.824226 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-trsgn" podStartSLOduration=153.824210187 podStartE2EDuration="2m33.824210187s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:26.78915847 +0000 UTC m=+179.011336940" watchObservedRunningTime="2026-01-21 21:11:26.824210187 +0000 UTC m=+179.046388657"
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.826680 4860 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.827001 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:26 crc kubenswrapper[4860]: I0121 21:11:26.832120 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" podStartSLOduration=153.832090407 podStartE2EDuration="2m33.832090407s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:26.566978427 +0000 UTC m=+178.789156897" watchObservedRunningTime="2026-01-21 21:11:26.832090407 +0000 UTC m=+179.054268897"
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.318522 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqdq" event={"ID":"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48","Type":"ContainerStarted","Data":"9447a8b5eba07ae23ab47e97151bf151a93222d9fc8eb714949dd8ef31b29368"}
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.352720 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rrwcr" event={"ID":"60ae05da-3403-4a2f-92f4-2ffa574a65a8","Type":"ContainerStarted","Data":"f453a14da3bc5a8193018d61fc746ee148ef7096682d149a1b2aa23d36db853f"}
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.381138 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nsjpv\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.384713 4860 generic.go:334] "Generic (PLEG): container finished" podID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" containerID="085ca0b03d683d05b469df1401edff73085906a273bb1d5f2723419b8737cad4" exitCode=0
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.384867 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6kb9" event={"ID":"a21cacfb-049f-48d8-8c5d-4ad7ee333834","Type":"ContainerDied","Data":"085ca0b03d683d05b469df1401edff73085906a273bb1d5f2723419b8737cad4"}
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.406666 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l87hr" event={"ID":"c599eaed-fddf-4591-a474-f8c85a5470ae","Type":"ContainerStarted","Data":"33668f061e3a7d7f3520dbefb7f2fd8eb7df281d6440d1e898b9492880754a87"}
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.413142 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngmkj" event={"ID":"ce35873b-5e42-4d33-9212-f78afae53fd0","Type":"ContainerStarted","Data":"3a418381e56aa97a219bbd5285a87baf8febedd034cbee4a453faffb2e7ea5e3"}
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.415436 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" event={"ID":"bfcb6184-d86e-4425-9c9c-99ec900dea78","Type":"ContainerStarted","Data":"b113c7dffc34c82f3af653442ba363909f222e5acb920367d6bac3adfb2ae5a3"}
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.420449 4860 generic.go:334] "Generic (PLEG): container finished" podID="adf72aac-c719-4347-824a-c033f4f3a240" containerID="6c30850e489ee04e506be6ffef60f9c6cbd6982f7cf6897c8e3a45d2fdd05f55" exitCode=0
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.420522 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2slz" event={"ID":"adf72aac-c719-4347-824a-c033f4f3a240","Type":"ContainerDied","Data":"6c30850e489ee04e506be6ffef60f9c6cbd6982f7cf6897c8e3a45d2fdd05f55"}
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.426522 4860 generic.go:334] "Generic (PLEG): container finished" podID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerID="989806eae179705dba7fbbdfa9c7525b7b01c16da6db88bd079fdea9a35925ba" exitCode=0
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.426734 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkdc" event={"ID":"dda00c6f-b112-49c0-bef6-aa2770a1c323","Type":"ContainerDied","Data":"989806eae179705dba7fbbdfa9c7525b7b01c16da6db88bd079fdea9a35925ba"}
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.431733 4860 generic.go:334] "Generic (PLEG): container finished" podID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerID="feb6b85fed7542d666ccf71e8fc214698d13f740630bb0fd3b9d5ae3e0b63bb9" exitCode=0
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.431803 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh97n" event={"ID":"6d731289-0564-4ea3-a2ea-c19c361c0d3e","Type":"ContainerDied","Data":"feb6b85fed7542d666ccf71e8fc214698d13f740630bb0fd3b9d5ae3e0b63bb9"}
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.434792 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rgh9" event={"ID":"41129b4d-292c-46eb-807b-ed0c56b43c9b","Type":"ContainerStarted","Data":"97509cdd3c399d835da39d67052dd0926d985570657bd9b848c12417a142cc02"}
Jan 21 21:11:27 crc kubenswrapper[4860]: I0121 21:11:27.812780 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-s24bn"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.813765 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.832065 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:28 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:28 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:28 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.832147 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.851833 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kc7kn" podStartSLOduration=154.851806892 podStartE2EDuration="2m34.851806892s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:27.259255498 +0000 UTC m=+179.481433968" watchObservedRunningTime="2026-01-21 21:11:27.851806892 +0000 UTC m=+180.073985362"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.874191 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4t7l" podStartSLOduration=154.874167134 podStartE2EDuration="2m34.874167134s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:27.852314667 +0000 UTC m=+180.074493147" watchObservedRunningTime="2026-01-21 21:11:27.874167134 +0000 UTC m=+180.096345604"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.881826 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.882772 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.894610 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.895219 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.910994 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc08bbd5-9ae0-4234-8482-90232f462aeb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bc08bbd5-9ae0-4234-8482-90232f462aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.911069 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc08bbd5-9ae0-4234-8482-90232f462aeb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bc08bbd5-9ae0-4234-8482-90232f462aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:27.922985 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.019627 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc08bbd5-9ae0-4234-8482-90232f462aeb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bc08bbd5-9ae0-4234-8482-90232f462aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.019829 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc08bbd5-9ae0-4234-8482-90232f462aeb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bc08bbd5-9ae0-4234-8482-90232f462aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.020053 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc08bbd5-9ae0-4234-8482-90232f462aeb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bc08bbd5-9ae0-4234-8482-90232f462aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.020835 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn" podStartSLOduration=155.020789981 podStartE2EDuration="2m35.020789981s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:28.017856112 +0000 UTC m=+180.240034592" watchObservedRunningTime="2026-01-21 21:11:28.020789981 +0000 UTC m=+180.242968451"
Jan 21 21:11:28 crc
kubenswrapper[4860]: I0121 21:11:28.112794 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc08bbd5-9ae0-4234-8482-90232f462aeb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bc08bbd5-9ae0-4234-8482-90232f462aeb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.180223 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q224d" podStartSLOduration=155.180196614 podStartE2EDuration="2m35.180196614s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:28.107501151 +0000 UTC m=+180.329679621" watchObservedRunningTime="2026-01-21 21:11:28.180196614 +0000 UTC m=+180.402375084" Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.392860 4860 patch_prober.go:28] interesting pod/console-f9d7485db-hbh47 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.392981 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hbh47" podUID="235af04d-ef1a-4328-a0c4-aa6d5bc04b92" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.432808 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-b9252" podStartSLOduration=155.432779043 podStartE2EDuration="2m35.432779043s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:28.246647007 +0000 UTC m=+180.468825477" watchObservedRunningTime="2026-01-21 21:11:28.432779043 +0000 UTC m=+180.654957503" Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.549072 4860 generic.go:334] "Generic (PLEG): container finished" podID="c599eaed-fddf-4591-a474-f8c85a5470ae" containerID="78b8d6f969ebeae0edd3eecfface32ae9306968128973035c5099bee50ac6aa7" exitCode=0 Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.549636 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l87hr" event={"ID":"c599eaed-fddf-4591-a474-f8c85a5470ae","Type":"ContainerDied","Data":"78b8d6f969ebeae0edd3eecfface32ae9306968128973035c5099bee50ac6aa7"} Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.612291 4860 generic.go:334] "Generic (PLEG): container finished" podID="41129b4d-292c-46eb-807b-ed0c56b43c9b" containerID="7cfdeb424752ccd6efc6590ef947538480ba1681acfa81169d28673a38bbc24f" exitCode=0 Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.617104 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.617843 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rgh9" event={"ID":"41129b4d-292c-46eb-807b-ed0c56b43c9b","Type":"ContainerDied","Data":"7cfdeb424752ccd6efc6590ef947538480ba1681acfa81169d28673a38bbc24f"} Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.652502 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:11:28 crc 
kubenswrapper[4860]: I0121 21:11:28.652574 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.655188 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.655252 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.660070 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:28 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:28 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:28 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.660187 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.677223 4860 generic.go:334] "Generic (PLEG): container finished" 
podID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerID="154316144c4eda081c33af65b6799f96f157906c09049060a8b2728261762015" exitCode=0 Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.677385 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngmkj" event={"ID":"ce35873b-5e42-4d33-9212-f78afae53fd0","Type":"ContainerDied","Data":"154316144c4eda081c33af65b6799f96f157906c09049060a8b2728261762015"} Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.955247 4860 generic.go:334] "Generic (PLEG): container finished" podID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerID="65b45a23e03d63d4c192c378da99142f29998f25b6ebf463c9ca378f4195bae8" exitCode=0 Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.957617 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqdq" event={"ID":"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48","Type":"ContainerDied","Data":"65b45a23e03d63d4c192c378da99142f29998f25b6ebf463c9ca378f4195bae8"} Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.980923 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" event={"ID":"70c3c027-6018-4182-bf8c-6309230608eb","Type":"ContainerDied","Data":"34667afbc98e26773c6d5fc353d763390c48753acfa8978cb49c7332c5dc0518"} Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.981017 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34667afbc98e26773c6d5fc353d763390c48753acfa8978cb49c7332c5dc0518" Jan 21 21:11:28 crc kubenswrapper[4860]: I0121 21:11:28.982121 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.244760 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70c3c027-6018-4182-bf8c-6309230608eb-config-volume\") pod \"70c3c027-6018-4182-bf8c-6309230608eb\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.244869 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp6x9\" (UniqueName: \"kubernetes.io/projected/70c3c027-6018-4182-bf8c-6309230608eb-kube-api-access-zp6x9\") pod \"70c3c027-6018-4182-bf8c-6309230608eb\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.244913 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70c3c027-6018-4182-bf8c-6309230608eb-secret-volume\") pod \"70c3c027-6018-4182-bf8c-6309230608eb\" (UID: \"70c3c027-6018-4182-bf8c-6309230608eb\") " Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.667617 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c3c027-6018-4182-bf8c-6309230608eb-config-volume" (OuterVolumeSpecName: "config-volume") pod "70c3c027-6018-4182-bf8c-6309230608eb" (UID: "70c3c027-6018-4182-bf8c-6309230608eb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.701267 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c3c027-6018-4182-bf8c-6309230608eb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70c3c027-6018-4182-bf8c-6309230608eb" (UID: "70c3c027-6018-4182-bf8c-6309230608eb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.701669 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:29 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:29 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:29 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.701737 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.706508 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c3c027-6018-4182-bf8c-6309230608eb-kube-api-access-zp6x9" (OuterVolumeSpecName: "kube-api-access-zp6x9") pod "70c3c027-6018-4182-bf8c-6309230608eb" (UID: "70c3c027-6018-4182-bf8c-6309230608eb"). InnerVolumeSpecName "kube-api-access-zp6x9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.768949 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftls8" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.769249 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.777203 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.777998 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.792735 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70c3c027-6018-4182-bf8c-6309230608eb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.792844 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp6x9\" (UniqueName: \"kubernetes.io/projected/70c3c027-6018-4182-bf8c-6309230608eb-kube-api-access-zp6x9\") on node \"crc\" DevicePath \"\"" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.792856 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70c3c027-6018-4182-bf8c-6309230608eb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.820397 4860 patch_prober.go:28] interesting pod/apiserver-76f77b778f-q9n6j container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 21:11:29 crc 
kubenswrapper[4860]: [+]log ok Jan 21 21:11:29 crc kubenswrapper[4860]: [+]etcd ok Jan 21 21:11:29 crc kubenswrapper[4860]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 21:11:29 crc kubenswrapper[4860]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 21:11:29 crc kubenswrapper[4860]: [+]poststarthook/max-in-flight-filter ok Jan 21 21:11:29 crc kubenswrapper[4860]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 21:11:29 crc kubenswrapper[4860]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 21 21:11:29 crc kubenswrapper[4860]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 21 21:11:29 crc kubenswrapper[4860]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 21 21:11:29 crc kubenswrapper[4860]: [+]poststarthook/project.openshift.io-projectcache ok Jan 21 21:11:29 crc kubenswrapper[4860]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 21 21:11:29 crc kubenswrapper[4860]: [+]poststarthook/openshift.io-startinformers ok Jan 21 21:11:29 crc kubenswrapper[4860]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 21 21:11:29 crc kubenswrapper[4860]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 21:11:29 crc kubenswrapper[4860]: livez check failed Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.820498 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" podUID="3a3fc408-742d-46bb-93cd-05343faababf" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:29 crc kubenswrapper[4860]: I0121 21:11:29.822351 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nsjpv"] Jan 21 21:11:30 crc kubenswrapper[4860]: I0121 21:11:30.159568 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" event={"ID":"3ce6d0d8-ad17-4129-801d-508640c3419a","Type":"ContainerStarted","Data":"9b0847d461027b55b0a3f637033837506ac9a1a608bc61b462212425d7f7241a"} Jan 21 21:11:30 crc kubenswrapper[4860]: I0121 21:11:30.274378 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp" Jan 21 21:11:30 crc kubenswrapper[4860]: I0121 21:11:30.313229 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rrwcr" podStartSLOduration=158.313200813 podStartE2EDuration="2m38.313200813s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:30.306575162 +0000 UTC m=+182.528753642" watchObservedRunningTime="2026-01-21 21:11:30.313200813 +0000 UTC m=+182.535379283" Jan 21 21:11:30 crc kubenswrapper[4860]: I0121 21:11:30.355609 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7vdnh" podStartSLOduration=158.355583663 podStartE2EDuration="2m38.355583663s" podCreationTimestamp="2026-01-21 21:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:30.350331233 +0000 UTC m=+182.572509703" watchObservedRunningTime="2026-01-21 21:11:30.355583663 +0000 UTC m=+182.577762133" Jan 21 21:11:30 crc kubenswrapper[4860]: E0121 21:11:30.474411 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c3c027_6018_4182_bf8c_6309230608eb.slice/crio-34667afbc98e26773c6d5fc353d763390c48753acfa8978cb49c7332c5dc0518\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c3c027_6018_4182_bf8c_6309230608eb.slice\": RecentStats: unable to find data in memory cache]" Jan 21 21:11:30 crc kubenswrapper[4860]: I0121 21:11:30.765997 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:30 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:30 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:30 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:30 crc kubenswrapper[4860]: I0121 21:11:30.766480 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.008885 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" podStartSLOduration=25.00885444 podStartE2EDuration="25.00885444s" podCreationTimestamp="2026-01-21 21:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:30.991862952 +0000 UTC m=+183.214041442" watchObservedRunningTime="2026-01-21 21:11:31.00885444 +0000 UTC m=+183.231032910" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.321061 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.336004 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rkt4n" 
event={"ID":"bfcb6184-d86e-4425-9c9c-99ec900dea78","Type":"ContainerStarted","Data":"ebe58eebe3920cabb07047c6e69ae80c711b0748b047f3264881c8ac729cd63f"} Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.338874 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" event={"ID":"3ce6d0d8-ad17-4129-801d-508640c3419a","Type":"ContainerStarted","Data":"856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0"} Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.339594 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.432107 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 21:11:31 crc kubenswrapper[4860]: E0121 21:11:31.432655 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c3c027-6018-4182-bf8c-6309230608eb" containerName="collect-profiles" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.432692 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c3c027-6018-4182-bf8c-6309230608eb" containerName="collect-profiles" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.432866 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c3c027-6018-4182-bf8c-6309230608eb" containerName="collect-profiles" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.433574 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.440040 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.440363 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.463049 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" podStartSLOduration=158.463026715 podStartE2EDuration="2m38.463026715s" podCreationTimestamp="2026-01-21 21:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:31.424333307 +0000 UTC m=+183.646511787" watchObservedRunningTime="2026-01-21 21:11:31.463026715 +0000 UTC m=+183.685205195" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.464445 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.577275 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a3dda04b-2d31-41f6-a1e1-d82c644e8254\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.577378 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a3dda04b-2d31-41f6-a1e1-d82c644e8254\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.655850 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:31 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:31 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:31 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.655976 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.682760 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a3dda04b-2d31-41f6-a1e1-d82c644e8254\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.683307 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a3dda04b-2d31-41f6-a1e1-d82c644e8254\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.683389 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a3dda04b-2d31-41f6-a1e1-d82c644e8254\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 21:11:31 crc kubenswrapper[4860]: I0121 21:11:31.980899 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a3dda04b-2d31-41f6-a1e1-d82c644e8254\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 21:11:32 crc kubenswrapper[4860]: I0121 21:11:32.139208 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 21:11:32 crc kubenswrapper[4860]: I0121 21:11:32.139902 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:11:32 crc kubenswrapper[4860]: I0121 21:11:32.140052 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:11:32 crc kubenswrapper[4860]: I0121 21:11:32.468093 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bc08bbd5-9ae0-4234-8482-90232f462aeb","Type":"ContainerStarted","Data":"716574c9b86847c2bd03cd72e5017f0b11bb7cc34fd3812eb7b6263c5774faeb"} Jan 21 21:11:32 crc kubenswrapper[4860]: I0121 21:11:32.642480 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Jan 21 21:11:32 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:32 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:32 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:32 crc kubenswrapper[4860]: I0121 21:11:32.642588 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:33 crc kubenswrapper[4860]: I0121 21:11:33.149923 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 21:11:33 crc kubenswrapper[4860]: W0121 21:11:33.307875 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda3dda04b_2d31_41f6_a1e1_d82c644e8254.slice/crio-8d51c10ce2809251f81425d2c9fe73ef312a40bb9580cbb92c26d82f801f5205 WatchSource:0}: Error finding container 8d51c10ce2809251f81425d2c9fe73ef312a40bb9580cbb92c26d82f801f5205: Status 404 returned error can't find the container with id 8d51c10ce2809251f81425d2c9fe73ef312a40bb9580cbb92c26d82f801f5205 Jan 21 21:11:33 crc kubenswrapper[4860]: I0121 21:11:33.541267 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bc08bbd5-9ae0-4234-8482-90232f462aeb","Type":"ContainerStarted","Data":"11d74025a107aebc346bec0c902cd182cab36edf18b34cdebb5c1b43b4d4a679"} Jan 21 21:11:33 crc kubenswrapper[4860]: I0121 21:11:33.543717 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a3dda04b-2d31-41f6-a1e1-d82c644e8254","Type":"ContainerStarted","Data":"8d51c10ce2809251f81425d2c9fe73ef312a40bb9580cbb92c26d82f801f5205"} Jan 21 21:11:33 crc kubenswrapper[4860]: I0121 21:11:33.551774 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:33 crc kubenswrapper[4860]: I0121 21:11:33.567925 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=6.5678424159999995 podStartE2EDuration="6.567842416s" podCreationTimestamp="2026-01-21 21:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:33.559438021 +0000 UTC m=+185.781616491" watchObservedRunningTime="2026-01-21 21:11:33.567842416 +0000 UTC m=+185.790020886" Jan 21 21:11:33 crc kubenswrapper[4860]: I0121 21:11:33.848648 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-q9n6j" Jan 21 21:11:33 crc kubenswrapper[4860]: I0121 21:11:33.853604 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:33 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:33 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:33 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:33 crc kubenswrapper[4860]: I0121 21:11:33.879843 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:34 crc kubenswrapper[4860]: I0121 21:11:34.636687 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 
21 21:11:34 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:34 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:34 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:34 crc kubenswrapper[4860]: I0121 21:11:34.637068 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:35 crc kubenswrapper[4860]: I0121 21:11:35.589713 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a3dda04b-2d31-41f6-a1e1-d82c644e8254","Type":"ContainerStarted","Data":"e107ec3e5a0856271c118be9b25b2ee5593959f3c26715ccd5225b478e27b2ce"} Jan 21 21:11:35 crc kubenswrapper[4860]: I0121 21:11:35.612160 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.612133936 podStartE2EDuration="4.612133936s" podCreationTimestamp="2026-01-21 21:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:11:35.61096006 +0000 UTC m=+187.833138540" watchObservedRunningTime="2026-01-21 21:11:35.612133936 +0000 UTC m=+187.834312416" Jan 21 21:11:35 crc kubenswrapper[4860]: I0121 21:11:35.636849 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:35 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:35 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:35 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:35 crc kubenswrapper[4860]: I0121 21:11:35.636986 4860 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:36 crc kubenswrapper[4860]: I0121 21:11:36.610710 4860 generic.go:334] "Generic (PLEG): container finished" podID="a3dda04b-2d31-41f6-a1e1-d82c644e8254" containerID="e107ec3e5a0856271c118be9b25b2ee5593959f3c26715ccd5225b478e27b2ce" exitCode=0 Jan 21 21:11:36 crc kubenswrapper[4860]: I0121 21:11:36.610795 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a3dda04b-2d31-41f6-a1e1-d82c644e8254","Type":"ContainerDied","Data":"e107ec3e5a0856271c118be9b25b2ee5593959f3c26715ccd5225b478e27b2ce"} Jan 21 21:11:36 crc kubenswrapper[4860]: I0121 21:11:36.613014 4860 generic.go:334] "Generic (PLEG): container finished" podID="bc08bbd5-9ae0-4234-8482-90232f462aeb" containerID="11d74025a107aebc346bec0c902cd182cab36edf18b34cdebb5c1b43b4d4a679" exitCode=0 Jan 21 21:11:36 crc kubenswrapper[4860]: I0121 21:11:36.613042 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bc08bbd5-9ae0-4234-8482-90232f462aeb","Type":"ContainerDied","Data":"11d74025a107aebc346bec0c902cd182cab36edf18b34cdebb5c1b43b4d4a679"} Jan 21 21:11:36 crc kubenswrapper[4860]: I0121 21:11:36.647003 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:36 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:36 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:36 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:36 crc kubenswrapper[4860]: I0121 21:11:36.647109 4860 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:37 crc kubenswrapper[4860]: I0121 21:11:37.630886 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:37 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:37 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:37 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:37 crc kubenswrapper[4860]: I0121 21:11:37.631334 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.487834 4860 patch_prober.go:28] interesting pod/console-f9d7485db-hbh47 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.488007 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hbh47" podUID="235af04d-ef1a-4328-a0c4-aa6d5bc04b92" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.612910 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.612997 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.613352 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.613370 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.623679 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-hv4bj" Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.624462 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"4a33e3713c8b17ee3d8bd0aff63c7aa597cebade5d96a9bdecf1b748e8b3d638"} pod="openshift-console/downloads-7954f5f757-hv4bj" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.624690 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" 
containerName="download-server" containerID="cri-o://4a33e3713c8b17ee3d8bd0aff63c7aa597cebade5d96a9bdecf1b748e8b3d638" gracePeriod=2 Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.625272 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.625290 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.937745 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:38 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:38 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:38 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:38 crc kubenswrapper[4860]: I0121 21:11:38.937835 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.156724 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.240273 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.345605 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc08bbd5-9ae0-4234-8482-90232f462aeb-kubelet-dir\") pod \"bc08bbd5-9ae0-4234-8482-90232f462aeb\" (UID: \"bc08bbd5-9ae0-4234-8482-90232f462aeb\") " Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.345717 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc08bbd5-9ae0-4234-8482-90232f462aeb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bc08bbd5-9ae0-4234-8482-90232f462aeb" (UID: "bc08bbd5-9ae0-4234-8482-90232f462aeb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.345789 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kube-api-access\") pod \"a3dda04b-2d31-41f6-a1e1-d82c644e8254\" (UID: \"a3dda04b-2d31-41f6-a1e1-d82c644e8254\") " Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.346013 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kubelet-dir\") pod \"a3dda04b-2d31-41f6-a1e1-d82c644e8254\" (UID: \"a3dda04b-2d31-41f6-a1e1-d82c644e8254\") " Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.346085 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc08bbd5-9ae0-4234-8482-90232f462aeb-kube-api-access\") pod \"bc08bbd5-9ae0-4234-8482-90232f462aeb\" (UID: \"bc08bbd5-9ae0-4234-8482-90232f462aeb\") " Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.346713 4860 
reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc08bbd5-9ae0-4234-8482-90232f462aeb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.347439 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a3dda04b-2d31-41f6-a1e1-d82c644e8254" (UID: "a3dda04b-2d31-41f6-a1e1-d82c644e8254"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.354378 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc08bbd5-9ae0-4234-8482-90232f462aeb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bc08bbd5-9ae0-4234-8482-90232f462aeb" (UID: "bc08bbd5-9ae0-4234-8482-90232f462aeb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.354593 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a3dda04b-2d31-41f6-a1e1-d82c644e8254" (UID: "a3dda04b-2d31-41f6-a1e1-d82c644e8254"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.448207 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.448240 4860 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3dda04b-2d31-41f6-a1e1-d82c644e8254-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.448251 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc08bbd5-9ae0-4234-8482-90232f462aeb-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.664103 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:39 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:39 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:39 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.664886 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.718283 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"bc08bbd5-9ae0-4234-8482-90232f462aeb","Type":"ContainerDied","Data":"716574c9b86847c2bd03cd72e5017f0b11bb7cc34fd3812eb7b6263c5774faeb"} Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.718345 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="716574c9b86847c2bd03cd72e5017f0b11bb7cc34fd3812eb7b6263c5774faeb" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.718344 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.731230 4860 generic.go:334] "Generic (PLEG): container finished" podID="8445d936-5e91-4817-afda-a75203024c29" containerID="4a33e3713c8b17ee3d8bd0aff63c7aa597cebade5d96a9bdecf1b748e8b3d638" exitCode=0 Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.731388 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hv4bj" event={"ID":"8445d936-5e91-4817-afda-a75203024c29","Type":"ContainerDied","Data":"4a33e3713c8b17ee3d8bd0aff63c7aa597cebade5d96a9bdecf1b748e8b3d638"} Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.731425 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hv4bj" event={"ID":"8445d936-5e91-4817-afda-a75203024c29","Type":"ContainerStarted","Data":"b05e35c31cd7ea21d949a953249e281dea65b1b1c97779b2fe11dbf635fe3f69"} Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.732541 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hv4bj" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.733170 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:11:39 
crc kubenswrapper[4860]: I0121 21:11:39.733260 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.741111 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a3dda04b-2d31-41f6-a1e1-d82c644e8254","Type":"ContainerDied","Data":"8d51c10ce2809251f81425d2c9fe73ef312a40bb9580cbb92c26d82f801f5205"} Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.741167 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d51c10ce2809251f81425d2c9fe73ef312a40bb9580cbb92c26d82f801f5205" Jan 21 21:11:39 crc kubenswrapper[4860]: I0121 21:11:39.741230 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 21:11:40 crc kubenswrapper[4860]: I0121 21:11:40.823218 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:40 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:40 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:40 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:40 crc kubenswrapper[4860]: I0121 21:11:40.824183 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:40 crc kubenswrapper[4860]: I0121 21:11:40.838222 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:11:40 crc kubenswrapper[4860]: I0121 21:11:40.838770 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:11:41 crc kubenswrapper[4860]: I0121 21:11:41.627577 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:41 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld 
Jan 21 21:11:41 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:41 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:41 crc kubenswrapper[4860]: I0121 21:11:41.628176 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:42 crc kubenswrapper[4860]: I0121 21:11:42.628705 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:42 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:42 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:42 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:42 crc kubenswrapper[4860]: I0121 21:11:42.629163 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:43 crc kubenswrapper[4860]: I0121 21:11:43.628805 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:43 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:43 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:43 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:43 crc kubenswrapper[4860]: I0121 21:11:43.629378 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" 
podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:44 crc kubenswrapper[4860]: I0121 21:11:44.628395 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:44 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:44 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:44 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:44 crc kubenswrapper[4860]: I0121 21:11:44.628785 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:45 crc kubenswrapper[4860]: I0121 21:11:45.656294 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:45 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:45 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:45 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:45 crc kubenswrapper[4860]: I0121 21:11:45.656474 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:46 crc kubenswrapper[4860]: I0121 21:11:46.628169 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:46 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:46 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:46 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:46 crc kubenswrapper[4860]: I0121 21:11:46.628385 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:47 crc kubenswrapper[4860]: I0121 21:11:47.627784 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 21:11:47 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 21 21:11:47 crc kubenswrapper[4860]: [+]process-running ok Jan 21 21:11:47 crc kubenswrapper[4860]: healthz check failed Jan 21 21:11:47 crc kubenswrapper[4860]: I0121 21:11:47.628085 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 21:11:47 crc kubenswrapper[4860]: I0121 21:11:47.821706 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:11:48 crc kubenswrapper[4860]: I0121 21:11:48.353884 4860 patch_prober.go:28] interesting pod/console-f9d7485db-hbh47 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 21 
21:11:48 crc kubenswrapper[4860]: I0121 21:11:48.354020 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hbh47" podUID="235af04d-ef1a-4328-a0c4-aa6d5bc04b92" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 21 21:11:48 crc kubenswrapper[4860]: I0121 21:11:48.761464 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 21 21:11:48 crc kubenswrapper[4860]: I0121 21:11:48.761505 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 21 21:11:48 crc kubenswrapper[4860]: I0121 21:11:48.761585 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 21 21:11:48 crc kubenswrapper[4860]: I0121 21:11:48.761519 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 21 21:11:48 crc kubenswrapper[4860]: I0121 21:11:48.761849 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:48 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:48 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:48 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:48 crc kubenswrapper[4860]: I0121 21:11:48.761869 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:48 crc kubenswrapper[4860]: I0121 21:11:48.773726 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncbcn"
Jan 21 21:11:49 crc kubenswrapper[4860]: I0121 21:11:49.642378 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:49 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:49 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:49 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:49 crc kubenswrapper[4860]: I0121 21:11:49.642755 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:50 crc kubenswrapper[4860]: I0121 21:11:50.900610 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:50 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:50 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:50 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:50 crc kubenswrapper[4860]: I0121 21:11:50.900693 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:51 crc kubenswrapper[4860]: I0121 21:11:51.628017 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:51 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:51 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:51 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:51 crc kubenswrapper[4860]: I0121 21:11:51.628116 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:53 crc kubenswrapper[4860]: I0121 21:11:53.155835 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:53 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:53 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:53 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:53 crc kubenswrapper[4860]: I0121 21:11:53.155907 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:53 crc kubenswrapper[4860]: I0121 21:11:53.628248 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:53 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:53 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:53 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:53 crc kubenswrapper[4860]: I0121 21:11:53.628350 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:54 crc kubenswrapper[4860]: I0121 21:11:54.628270 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:54 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:54 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:54 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:54 crc kubenswrapper[4860]: I0121 21:11:54.628760 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:54 crc kubenswrapper[4860]: I0121 21:11:54.776277 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xxb4c"] Jan 21
21:11:54 crc kubenswrapper[4860]: I0121 21:11:54.776739 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" podUID="fb13868e-5322-4a98-b168-40a0a6bd8459" containerName="controller-manager" containerID="cri-o://207a7e402cd0ab58554a33033af98800de2807214661f77ceceae45b2e1308ba" gracePeriod=30
Jan 21 21:11:54 crc kubenswrapper[4860]: I0121 21:11:54.864004 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7"]
Jan 21 21:11:54 crc kubenswrapper[4860]: I0121 21:11:54.864304 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" podUID="56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" containerName="route-controller-manager" containerID="cri-o://36924c1842314be88bfa57a5e209943e1fdbd2e12599736d32e7d88c05b0392a" gracePeriod=30
Jan 21 21:11:55 crc kubenswrapper[4860]: I0121 21:11:55.627642 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:55 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:55 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:55 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:55 crc kubenswrapper[4860]: I0121 21:11:55.627778 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:56 crc kubenswrapper[4860]: I0121 21:11:56.385487 4860 generic.go:334] "Generic (PLEG): container finished" podID="56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" containerID="36924c1842314be88bfa57a5e209943e1fdbd2e12599736d32e7d88c05b0392a" exitCode=0
Jan 21 21:11:56 crc kubenswrapper[4860]: I0121 21:11:56.385591 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" event={"ID":"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1","Type":"ContainerDied","Data":"36924c1842314be88bfa57a5e209943e1fdbd2e12599736d32e7d88c05b0392a"}
Jan 21 21:11:56 crc kubenswrapper[4860]: I0121 21:11:56.388405 4860 generic.go:334] "Generic (PLEG): container finished" podID="fb13868e-5322-4a98-b168-40a0a6bd8459" containerID="207a7e402cd0ab58554a33033af98800de2807214661f77ceceae45b2e1308ba" exitCode=0
Jan 21 21:11:56 crc kubenswrapper[4860]: I0121 21:11:56.388471 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" event={"ID":"fb13868e-5322-4a98-b168-40a0a6bd8459","Type":"ContainerDied","Data":"207a7e402cd0ab58554a33033af98800de2807214661f77ceceae45b2e1308ba"}
Jan 21 21:11:56 crc kubenswrapper[4860]: I0121 21:11:56.627675 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:56 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:56 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:56 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:56 crc kubenswrapper[4860]: I0121 21:11:56.627798 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:57 crc kubenswrapper[4860]: I0121 21:11:57.627072 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 21:11:57 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld
Jan 21 21:11:57 crc kubenswrapper[4860]: [+]process-running ok
Jan 21 21:11:57 crc kubenswrapper[4860]: healthz check failed
Jan 21 21:11:57 crc kubenswrapper[4860]: I0121 21:11:57.627194 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.250546 4860 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dzzs7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 21 21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.250871 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" podUID="56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 21 21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.337856 4860 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xxb4c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 21 21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.337944 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" podUID="fb13868e-5322-4a98-b168-40a0a6bd8459" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 21 21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.352923 4860 patch_prober.go:28] interesting pod/console-f9d7485db-hbh47 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 21 21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.353169 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hbh47" podUID="235af04d-ef1a-4328-a0c4-aa6d5bc04b92" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 21 21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.612880 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 21 21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.613005 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 21 21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.613028 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21
21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.613094 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 21 21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.630371 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-v4hsh"
Jan 21 21:11:58 crc kubenswrapper[4860]: I0121 21:11:58.634288 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-v4hsh"
Jan 21 21:12:02 crc kubenswrapper[4860]: I0121 21:12:02.104051 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:12:02 crc kubenswrapper[4860]: I0121 21:12:02.104697 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:12:02 crc kubenswrapper[4860]: I0121 21:12:02.104803 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx"
Jan 21 21:12:02 crc kubenswrapper[4860]: I0121 21:12:02.106155 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 21:12:02 crc kubenswrapper[4860]: I0121 21:12:02.106279 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6" gracePeriod=600
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.595477 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 21 21:12:05 crc kubenswrapper[4860]: E0121 21:12:05.596337 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc08bbd5-9ae0-4234-8482-90232f462aeb" containerName="pruner"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.596361 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc08bbd5-9ae0-4234-8482-90232f462aeb" containerName="pruner"
Jan 21 21:12:05 crc kubenswrapper[4860]: E0121 21:12:05.596386 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3dda04b-2d31-41f6-a1e1-d82c644e8254" containerName="pruner"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.596394 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3dda04b-2d31-41f6-a1e1-d82c644e8254" containerName="pruner"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.596644 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc08bbd5-9ae0-4234-8482-90232f462aeb" containerName="pruner"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.596673 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3dda04b-2d31-41f6-a1e1-d82c644e8254" containerName="pruner"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.597365 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.600706 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.600846 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.610611 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.612235 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b8f7599-5699-42cf-a872-4d517b948725-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4b8f7599-5699-42cf-a872-4d517b948725\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.612407 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b8f7599-5699-42cf-a872-4d517b948725-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4b8f7599-5699-42cf-a872-4d517b948725\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.713384 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b8f7599-5699-42cf-a872-4d517b948725-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4b8f7599-5699-42cf-a872-4d517b948725\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.713488 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b8f7599-5699-42cf-a872-4d517b948725-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4b8f7599-5699-42cf-a872-4d517b948725\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.713618 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b8f7599-5699-42cf-a872-4d517b948725-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4b8f7599-5699-42cf-a872-4d517b948725\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.734332 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b8f7599-5699-42cf-a872-4d517b948725-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4b8f7599-5699-42cf-a872-4d517b948725\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 21:12:05 crc kubenswrapper[4860]: I0121 21:12:05.923621 4860 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.250791 4860 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dzzs7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.250967 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" podUID="56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.337824 4860 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xxb4c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.337987 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" podUID="fb13868e-5322-4a98-b168-40a0a6bd8459" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.359888 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-hbh47"
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.366190 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-hbh47"
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.612123 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.612244 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.612843 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.612861 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.612893 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-hv4bj"
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.613550 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"b05e35c31cd7ea21d949a953249e281dea65b1b1c97779b2fe11dbf635fe3f69"} pod="openshift-console/downloads-7954f5f757-hv4bj" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.613593 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" containerID="cri-o://b05e35c31cd7ea21d949a953249e281dea65b1b1c97779b2fe11dbf635fe3f69" gracePeriod=2
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.614089 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 21 21:12:08 crc kubenswrapper[4860]: I0121 21:12:08.614122 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 21 21:12:09 crc kubenswrapper[4860]: I0121 21:12:09.668562 4860 patch_prober.go:28] interesting pod/router-default-5444994796-v4hsh container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 21:12:09 crc kubenswrapper[4860]: I0121 21:12:09.669340 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-v4hsh" podUID="b88e1a68-3348-4ac7-b0b8-ba2215da118f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.204266 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.208850 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.209047 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.213738 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517ce25f-4d56-4696-9b6a-eba3e518584c-kube-api-access\") pod \"installer-9-crc\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.213823 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-var-lock\") pod \"installer-9-crc\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.213898 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.315109 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517ce25f-4d56-4696-9b6a-eba3e518584c-kube-api-access\") pod \"installer-9-crc\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.315619 4860 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-var-lock\") pod \"installer-9-crc\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.315704 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.315973 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.316042 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-var-lock\") pod \"installer-9-crc\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.899124 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517ce25f-4d56-4696-9b6a-eba3e518584c-kube-api-access\") pod \"installer-9-crc\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.918297 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6" exitCode=0
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.918451 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6"}
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.921642 4860 generic.go:334] "Generic (PLEG): container finished" podID="8445d936-5e91-4817-afda-a75203024c29" containerID="b05e35c31cd7ea21d949a953249e281dea65b1b1c97779b2fe11dbf635fe3f69" exitCode=0
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.921685 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hv4bj" event={"ID":"8445d936-5e91-4817-afda-a75203024c29","Type":"ContainerDied","Data":"b05e35c31cd7ea21d949a953249e281dea65b1b1c97779b2fe11dbf635fe3f69"}
Jan 21 21:12:12 crc kubenswrapper[4860]: I0121 21:12:12.921776 4860 scope.go:117] "RemoveContainer" containerID="4a33e3713c8b17ee3d8bd0aff63c7aa597cebade5d96a9bdecf1b748e8b3d638"
Jan 21 21:12:13 crc kubenswrapper[4860]: I0121 21:12:13.148511 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:12:18 crc kubenswrapper[4860]: I0121 21:12:18.612710 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 21 21:12:18 crc kubenswrapper[4860]: I0121 21:12:18.614584 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.149895 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7"
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.190316 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx"]
Jan 21 21:12:19 crc kubenswrapper[4860]: E0121 21:12:19.190601 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" containerName="route-controller-manager"
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.190616 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" containerName="route-controller-manager"
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.191581 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" containerName="route-controller-manager"
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.194571 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx"
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.196138 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx"]
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.221577 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-config\") pod \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") "
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.221726 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-serving-cert\") pod \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") "
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.221821 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx42c\" (UniqueName: \"kubernetes.io/projected/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-kube-api-access-wx42c\") pod \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") "
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.221960 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-client-ca\") pod \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\" (UID: \"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1\") "
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.224661 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-client-ca" (OuterVolumeSpecName: "client-ca") pod "56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" (UID: "56f4a1c5-7451-4e6e-bdde-0fde5f2368c1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.224834 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-config" (OuterVolumeSpecName: "config") pod "56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" (UID: "56f4a1c5-7451-4e6e-bdde-0fde5f2368c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.230568 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-kube-api-access-wx42c" (OuterVolumeSpecName: "kube-api-access-wx42c") pod "56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" (UID: "56f4a1c5-7451-4e6e-bdde-0fde5f2368c1"). InnerVolumeSpecName "kube-api-access-wx42c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.230909 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" (UID: "56f4a1c5-7451-4e6e-bdde-0fde5f2368c1"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.251257 4860 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dzzs7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.251342 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" podUID="56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.324262 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-client-ca\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.324355 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-config\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.324385 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19ae23c-6e08-419d-8a8c-c3dd56f97954-serving-cert\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.324503 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9kmh\" (UniqueName: \"kubernetes.io/projected/d19ae23c-6e08-419d-8a8c-c3dd56f97954-kube-api-access-r9kmh\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.324573 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.324587 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.324600 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.324613 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx42c\" (UniqueName: \"kubernetes.io/projected/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1-kube-api-access-wx42c\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.337738 4860 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xxb4c container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: i/o timeout" start-of-body= Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.337818 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" podUID="fb13868e-5322-4a98-b168-40a0a6bd8459" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: i/o timeout" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.426003 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9kmh\" (UniqueName: \"kubernetes.io/projected/d19ae23c-6e08-419d-8a8c-c3dd56f97954-kube-api-access-r9kmh\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.426053 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-client-ca\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.426097 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-config\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.426131 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19ae23c-6e08-419d-8a8c-c3dd56f97954-serving-cert\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.427494 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-client-ca\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.427638 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-config\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.431193 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19ae23c-6e08-419d-8a8c-c3dd56f97954-serving-cert\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.443190 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9kmh\" (UniqueName: \"kubernetes.io/projected/d19ae23c-6e08-419d-8a8c-c3dd56f97954-kube-api-access-r9kmh\") pod \"route-controller-manager-b7bf799db-rvwfx\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") " pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 
21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.527365 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.973901 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" event={"ID":"56f4a1c5-7451-4e6e-bdde-0fde5f2368c1","Type":"ContainerDied","Data":"9f42f32f3be32a99852f9fb7e19100f7899cf4930098d73e8fa0041a3ca43970"} Jan 21 21:12:19 crc kubenswrapper[4860]: I0121 21:12:19.974089 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7" Jan 21 21:12:20 crc kubenswrapper[4860]: I0121 21:12:20.021146 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7"] Jan 21 21:12:20 crc kubenswrapper[4860]: I0121 21:12:20.024702 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dzzs7"] Jan 21 21:12:20 crc kubenswrapper[4860]: I0121 21:12:20.587466 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f4a1c5-7451-4e6e-bdde-0fde5f2368c1" path="/var/lib/kubelet/pods/56f4a1c5-7451-4e6e-bdde-0fde5f2368c1/volumes" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.111673 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:12:24 crc kubenswrapper[4860]: E0121 21:12:24.121848 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 21:12:24 crc kubenswrapper[4860]: E0121 21:12:24.122306 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f45rh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,
ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-l87hr_openshift-marketplace(c599eaed-fddf-4591-a474-f8c85a5470ae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 21:12:24 crc kubenswrapper[4860]: E0121 21:12:24.122800 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 21:12:24 crc kubenswrapper[4860]: E0121 21:12:24.123005 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbsmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,W
indowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gzkdc_openshift-marketplace(dda00c6f-b112-49c0-bef6-aa2770a1c323): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 21:12:24 crc kubenswrapper[4860]: E0121 21:12:24.123636 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-l87hr" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" Jan 21 21:12:24 crc kubenswrapper[4860]: E0121 21:12:24.124780 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gzkdc" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.150313 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-854d9bbbb-7x8ng"] Jan 21 21:12:24 crc kubenswrapper[4860]: E0121 21:12:24.150674 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb13868e-5322-4a98-b168-40a0a6bd8459" containerName="controller-manager" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.150691 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb13868e-5322-4a98-b168-40a0a6bd8459" containerName="controller-manager" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 
21:12:24.150847 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb13868e-5322-4a98-b168-40a0a6bd8459" containerName="controller-manager" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.151338 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.154880 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-854d9bbbb-7x8ng"] Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.158782 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-client-ca\") pod \"fb13868e-5322-4a98-b168-40a0a6bd8459\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.158868 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb13868e-5322-4a98-b168-40a0a6bd8459-serving-cert\") pod \"fb13868e-5322-4a98-b168-40a0a6bd8459\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.159176 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkfnw\" (UniqueName: \"kubernetes.io/projected/fb13868e-5322-4a98-b168-40a0a6bd8459-kube-api-access-qkfnw\") pod \"fb13868e-5322-4a98-b168-40a0a6bd8459\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.159240 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-proxy-ca-bundles\") pod \"fb13868e-5322-4a98-b168-40a0a6bd8459\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " Jan 21 21:12:24 crc 
kubenswrapper[4860]: I0121 21:12:24.159265 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-config\") pod \"fb13868e-5322-4a98-b168-40a0a6bd8459\" (UID: \"fb13868e-5322-4a98-b168-40a0a6bd8459\") " Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.159532 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-client-ca\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.159569 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/306ec328-aa70-4ce3-86da-2d3c6d6687b5-serving-cert\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.159707 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-proxy-ca-bundles\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.159742 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-config\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " 
pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.159784 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvmsb\" (UniqueName: \"kubernetes.io/projected/306ec328-aa70-4ce3-86da-2d3c6d6687b5-kube-api-access-xvmsb\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.160161 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-client-ca" (OuterVolumeSpecName: "client-ca") pod "fb13868e-5322-4a98-b168-40a0a6bd8459" (UID: "fb13868e-5322-4a98-b168-40a0a6bd8459"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.160869 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fb13868e-5322-4a98-b168-40a0a6bd8459" (UID: "fb13868e-5322-4a98-b168-40a0a6bd8459"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.161785 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-config" (OuterVolumeSpecName: "config") pod "fb13868e-5322-4a98-b168-40a0a6bd8459" (UID: "fb13868e-5322-4a98-b168-40a0a6bd8459"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.171844 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb13868e-5322-4a98-b168-40a0a6bd8459-kube-api-access-qkfnw" (OuterVolumeSpecName: "kube-api-access-qkfnw") pod "fb13868e-5322-4a98-b168-40a0a6bd8459" (UID: "fb13868e-5322-4a98-b168-40a0a6bd8459"). InnerVolumeSpecName "kube-api-access-qkfnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.175962 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb13868e-5322-4a98-b168-40a0a6bd8459-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fb13868e-5322-4a98-b168-40a0a6bd8459" (UID: "fb13868e-5322-4a98-b168-40a0a6bd8459"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.261344 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-client-ca\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.261427 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/306ec328-aa70-4ce3-86da-2d3c6d6687b5-serving-cert\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.261526 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-proxy-ca-bundles\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.261579 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-config\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.261630 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvmsb\" (UniqueName: \"kubernetes.io/projected/306ec328-aa70-4ce3-86da-2d3c6d6687b5-kube-api-access-xvmsb\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.261712 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.261737 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.261757 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb13868e-5322-4a98-b168-40a0a6bd8459-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.261775 4860 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-qkfnw\" (UniqueName: \"kubernetes.io/projected/fb13868e-5322-4a98-b168-40a0a6bd8459-kube-api-access-qkfnw\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.261796 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb13868e-5322-4a98-b168-40a0a6bd8459-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.263272 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-client-ca\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.265050 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-config\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.265503 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-proxy-ca-bundles\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.272769 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/306ec328-aa70-4ce3-86da-2d3c6d6687b5-serving-cert\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " 
pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.282720 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvmsb\" (UniqueName: \"kubernetes.io/projected/306ec328-aa70-4ce3-86da-2d3c6d6687b5-kube-api-access-xvmsb\") pod \"controller-manager-854d9bbbb-7x8ng\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") " pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:24 crc kubenswrapper[4860]: I0121 21:12:24.467184 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:25 crc kubenswrapper[4860]: I0121 21:12:25.011468 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" event={"ID":"fb13868e-5322-4a98-b168-40a0a6bd8459","Type":"ContainerDied","Data":"5a15135a7a2f8bda05be053f4e6206dcf8a7c4d3954121f2c7c146f8a54ea96e"} Jan 21 21:12:25 crc kubenswrapper[4860]: I0121 21:12:25.011811 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xxb4c" Jan 21 21:12:25 crc kubenswrapper[4860]: I0121 21:12:25.099045 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xxb4c"] Jan 21 21:12:25 crc kubenswrapper[4860]: I0121 21:12:25.108466 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xxb4c"] Jan 21 21:12:26 crc kubenswrapper[4860]: I0121 21:12:26.587578 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb13868e-5322-4a98-b168-40a0a6bd8459" path="/var/lib/kubelet/pods/fb13868e-5322-4a98-b168-40a0a6bd8459/volumes" Jan 21 21:12:28 crc kubenswrapper[4860]: E0121 21:12:28.129858 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-l87hr" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" Jan 21 21:12:28 crc kubenswrapper[4860]: E0121 21:12:28.129847 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gzkdc" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" Jan 21 21:12:28 crc kubenswrapper[4860]: E0121 21:12:28.211018 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 21:12:28 crc kubenswrapper[4860]: E0121 21:12:28.211504 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fxcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ngmkj_openshift-marketplace(ce35873b-5e42-4d33-9212-f78afae53fd0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 21:12:28 crc kubenswrapper[4860]: E0121 21:12:28.212851 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ngmkj" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" Jan 21 21:12:28 crc kubenswrapper[4860]: I0121 21:12:28.612192 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:12:28 crc kubenswrapper[4860]: I0121 21:12:28.612323 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:12:29 crc kubenswrapper[4860]: E0121 21:12:29.772511 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ngmkj" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" Jan 21 21:12:29 crc kubenswrapper[4860]: E0121 21:12:29.838445 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 21:12:29 crc kubenswrapper[4860]: E0121 21:12:29.838702 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pd7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-z6kb9_openshift-marketplace(a21cacfb-049f-48d8-8c5d-4ad7ee333834): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 21:12:29 crc kubenswrapper[4860]: E0121 21:12:29.839881 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-z6kb9" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" Jan 21 21:12:30 crc 
kubenswrapper[4860]: I0121 21:12:30.648744 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fvk47"] Jan 21 21:12:32 crc kubenswrapper[4860]: E0121 21:12:32.340475 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-z6kb9" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" Jan 21 21:12:32 crc kubenswrapper[4860]: E0121 21:12:32.432358 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 21:12:32 crc kubenswrapper[4860]: E0121 21:12:32.432618 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2bvkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zh97n_openshift-marketplace(6d731289-0564-4ea3-a2ea-c19c361c0d3e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 21:12:32 crc kubenswrapper[4860]: E0121 21:12:32.435831 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-zh97n" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" Jan 21 21:12:32 crc 
kubenswrapper[4860]: E0121 21:12:32.444142 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 21:12:32 crc kubenswrapper[4860]: E0121 21:12:32.444312 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8j26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-9dqdq_openshift-marketplace(f1a9e789-f7d5-4640-8ecf-4eef9aa31a48): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 21:12:32 crc kubenswrapper[4860]: E0121 21:12:32.445499 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9dqdq" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" Jan 21 21:12:32 crc kubenswrapper[4860]: E0121 21:12:32.456665 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 21:12:32 crc kubenswrapper[4860]: E0121 21:12:32.456872 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wk958,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-m2slz_openshift-marketplace(adf72aac-c719-4347-824a-c033f4f3a240): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 21:12:32 crc kubenswrapper[4860]: E0121 21:12:32.459282 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-m2slz" podUID="adf72aac-c719-4347-824a-c033f4f3a240" Jan 21 21:12:32 crc 
kubenswrapper[4860]: E0121 21:12:32.477781 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 21:12:32 crc kubenswrapper[4860]: E0121 21:12:32.477943 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckxnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-9rgh9_openshift-marketplace(41129b4d-292c-46eb-807b-ed0c56b43c9b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 21:12:32 crc kubenswrapper[4860]: E0121 21:12:32.479401 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9rgh9" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" Jan 21 21:12:32 crc kubenswrapper[4860]: I0121 21:12:32.526735 4860 scope.go:117] "RemoveContainer" containerID="36924c1842314be88bfa57a5e209943e1fdbd2e12599736d32e7d88c05b0392a" Jan 21 21:12:32 crc kubenswrapper[4860]: I0121 21:12:32.595718 4860 scope.go:117] "RemoveContainer" containerID="207a7e402cd0ab58554a33033af98800de2807214661f77ceceae45b2e1308ba" Jan 21 21:12:32 crc kubenswrapper[4860]: I0121 21:12:32.956725 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-854d9bbbb-7x8ng"] Jan 21 21:12:32 crc kubenswrapper[4860]: W0121 21:12:32.960183 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod306ec328_aa70_4ce3_86da_2d3c6d6687b5.slice/crio-3ba27bac2bc5111519ab8c4e21376cdef57f58c9cba38dfd0d47a0fb89e69c8f WatchSource:0}: Error finding container 3ba27bac2bc5111519ab8c4e21376cdef57f58c9cba38dfd0d47a0fb89e69c8f: Status 404 returned error can't find the container with id 3ba27bac2bc5111519ab8c4e21376cdef57f58c9cba38dfd0d47a0fb89e69c8f Jan 21 21:12:33 crc kubenswrapper[4860]: I0121 21:12:33.072897 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" 
event={"ID":"306ec328-aa70-4ce3-86da-2d3c6d6687b5","Type":"ContainerStarted","Data":"3ba27bac2bc5111519ab8c4e21376cdef57f58c9cba38dfd0d47a0fb89e69c8f"} Jan 21 21:12:33 crc kubenswrapper[4860]: I0121 21:12:33.080707 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 21:12:33 crc kubenswrapper[4860]: I0121 21:12:33.088190 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 21:12:33 crc kubenswrapper[4860]: I0121 21:12:33.102752 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx"] Jan 21 21:12:33 crc kubenswrapper[4860]: I0121 21:12:33.104440 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"3b65df24bc6ea2dc841321cae48e22a15ad8f9a2859950e88c8846162091f287"} Jan 21 21:12:33 crc kubenswrapper[4860]: I0121 21:12:33.122685 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hv4bj" event={"ID":"8445d936-5e91-4817-afda-a75203024c29","Type":"ContainerStarted","Data":"5befba3fa2fbc4ee26df86d4bcedfcae1cb1f4194f3215b7c415432857bf9b82"} Jan 21 21:12:33 crc kubenswrapper[4860]: I0121 21:12:33.123862 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:12:33 crc kubenswrapper[4860]: I0121 21:12:33.124120 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial 
tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:12:33 crc kubenswrapper[4860]: I0121 21:12:33.123994 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hv4bj" Jan 21 21:12:33 crc kubenswrapper[4860]: W0121 21:12:33.127969 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd19ae23c_6e08_419d_8a8c_c3dd56f97954.slice/crio-ded34d7efd1fd39ecfc19c277b7f666342e73acbfc5ee3906b15fc64e8a33079 WatchSource:0}: Error finding container ded34d7efd1fd39ecfc19c277b7f666342e73acbfc5ee3906b15fc64e8a33079: Status 404 returned error can't find the container with id ded34d7efd1fd39ecfc19c277b7f666342e73acbfc5ee3906b15fc64e8a33079 Jan 21 21:12:33 crc kubenswrapper[4860]: E0121 21:12:33.164264 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zh97n" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" Jan 21 21:12:33 crc kubenswrapper[4860]: E0121 21:12:33.164725 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-m2slz" podUID="adf72aac-c719-4347-824a-c033f4f3a240" Jan 21 21:12:33 crc kubenswrapper[4860]: E0121 21:12:33.165415 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9dqdq" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" Jan 21 21:12:33 crc kubenswrapper[4860]: E0121 
21:12:33.165455 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9rgh9" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.143416 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"517ce25f-4d56-4696-9b6a-eba3e518584c","Type":"ContainerStarted","Data":"f4bb6fc008da451907bfa9e63e25f75b8bcf4bfc5d97d3543513421a65cdc2e7"} Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.145071 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"517ce25f-4d56-4696-9b6a-eba3e518584c","Type":"ContainerStarted","Data":"bb77d14cc6561f39438de47eede9a259b8e98ed582ada2be066278d5a5b4c380"} Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.148446 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4b8f7599-5699-42cf-a872-4d517b948725","Type":"ContainerStarted","Data":"5973dd83d1d80d2aebf01e5954b202c24cc1bdc3fa0ba39a549b2e6240b53714"} Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.148840 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4b8f7599-5699-42cf-a872-4d517b948725","Type":"ContainerStarted","Data":"7b0451be82f98ce8597559163666e1e3f66cf0e552303b71323b154f88024837"} Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.150592 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" event={"ID":"d19ae23c-6e08-419d-8a8c-c3dd56f97954","Type":"ContainerStarted","Data":"7fc0b0e848a2259de55b54ba1b3a44adb8875d1645f8906c89942826a2bca894"} Jan 21 21:12:34 crc 
kubenswrapper[4860]: I0121 21:12:34.150643 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" event={"ID":"d19ae23c-6e08-419d-8a8c-c3dd56f97954","Type":"ContainerStarted","Data":"ded34d7efd1fd39ecfc19c277b7f666342e73acbfc5ee3906b15fc64e8a33079"} Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.151092 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.152635 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" event={"ID":"306ec328-aa70-4ce3-86da-2d3c6d6687b5","Type":"ContainerStarted","Data":"597fdc06c44f2475a454833e7c95769463d85c98c8b883e3bb553b115b570fe5"} Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.153359 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.153470 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.158006 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.169348 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" 
podStartSLOduration=22.169289909 podStartE2EDuration="22.169289909s" podCreationTimestamp="2026-01-21 21:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:12:34.166665439 +0000 UTC m=+246.388843909" watchObservedRunningTime="2026-01-21 21:12:34.169289909 +0000 UTC m=+246.391468379" Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.194114 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" podStartSLOduration=20.194090773 podStartE2EDuration="20.194090773s" podCreationTimestamp="2026-01-21 21:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:12:34.191373581 +0000 UTC m=+246.413552051" watchObservedRunningTime="2026-01-21 21:12:34.194090773 +0000 UTC m=+246.416269243" Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.222525 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" podStartSLOduration=20.222489118 podStartE2EDuration="20.222489118s" podCreationTimestamp="2026-01-21 21:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:12:34.218337402 +0000 UTC m=+246.440515882" watchObservedRunningTime="2026-01-21 21:12:34.222489118 +0000 UTC m=+246.444667598" Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.240737 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=29.240702493 podStartE2EDuration="29.240702493s" podCreationTimestamp="2026-01-21 21:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-21 21:12:34.238628119 +0000 UTC m=+246.460806589" watchObservedRunningTime="2026-01-21 21:12:34.240702493 +0000 UTC m=+246.462880973" Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.468205 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:34 crc kubenswrapper[4860]: I0121 21:12:34.473000 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:35 crc kubenswrapper[4860]: I0121 21:12:35.160234 4860 generic.go:334] "Generic (PLEG): container finished" podID="4b8f7599-5699-42cf-a872-4d517b948725" containerID="5973dd83d1d80d2aebf01e5954b202c24cc1bdc3fa0ba39a549b2e6240b53714" exitCode=0 Jan 21 21:12:35 crc kubenswrapper[4860]: I0121 21:12:35.160284 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4b8f7599-5699-42cf-a872-4d517b948725","Type":"ContainerDied","Data":"5973dd83d1d80d2aebf01e5954b202c24cc1bdc3fa0ba39a549b2e6240b53714"} Jan 21 21:12:35 crc kubenswrapper[4860]: I0121 21:12:35.161524 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-hv4bj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 21:12:35 crc kubenswrapper[4860]: I0121 21:12:35.161712 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hv4bj" podUID="8445d936-5e91-4817-afda-a75203024c29" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 21:12:36 crc kubenswrapper[4860]: I0121 21:12:36.489327 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 21:12:36 crc kubenswrapper[4860]: I0121 21:12:36.525760 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b8f7599-5699-42cf-a872-4d517b948725-kube-api-access\") pod \"4b8f7599-5699-42cf-a872-4d517b948725\" (UID: \"4b8f7599-5699-42cf-a872-4d517b948725\") "
Jan 21 21:12:36 crc kubenswrapper[4860]: I0121 21:12:36.525866 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b8f7599-5699-42cf-a872-4d517b948725-kubelet-dir\") pod \"4b8f7599-5699-42cf-a872-4d517b948725\" (UID: \"4b8f7599-5699-42cf-a872-4d517b948725\") "
Jan 21 21:12:36 crc kubenswrapper[4860]: I0121 21:12:36.526362 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b8f7599-5699-42cf-a872-4d517b948725-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4b8f7599-5699-42cf-a872-4d517b948725" (UID: "4b8f7599-5699-42cf-a872-4d517b948725"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 21:12:36 crc kubenswrapper[4860]: I0121 21:12:36.533251 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8f7599-5699-42cf-a872-4d517b948725-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4b8f7599-5699-42cf-a872-4d517b948725" (UID: "4b8f7599-5699-42cf-a872-4d517b948725"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:12:36 crc kubenswrapper[4860]: I0121 21:12:36.629538 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b8f7599-5699-42cf-a872-4d517b948725-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 21:12:36 crc kubenswrapper[4860]: I0121 21:12:36.629600 4860 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b8f7599-5699-42cf-a872-4d517b948725-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 21:12:37 crc kubenswrapper[4860]: I0121 21:12:37.171791 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4b8f7599-5699-42cf-a872-4d517b948725","Type":"ContainerDied","Data":"7b0451be82f98ce8597559163666e1e3f66cf0e552303b71323b154f88024837"}
Jan 21 21:12:37 crc kubenswrapper[4860]: I0121 21:12:37.171832 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b0451be82f98ce8597559163666e1e3f66cf0e552303b71323b154f88024837"
Jan 21 21:12:37 crc kubenswrapper[4860]: I0121 21:12:37.171863 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 21:12:38 crc kubenswrapper[4860]: I0121 21:12:38.626682 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hv4bj"
Jan 21 21:12:44 crc kubenswrapper[4860]: I0121 21:12:44.211199 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkdc" event={"ID":"dda00c6f-b112-49c0-bef6-aa2770a1c323","Type":"ContainerStarted","Data":"ff9ccf29e544762e6087e2e047187ef42010e13f462d96d7b6afa48d603ace68"}
Jan 21 21:12:45 crc kubenswrapper[4860]: I0121 21:12:45.219225 4860 generic.go:334] "Generic (PLEG): container finished" podID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerID="ff9ccf29e544762e6087e2e047187ef42010e13f462d96d7b6afa48d603ace68" exitCode=0
Jan 21 21:12:45 crc kubenswrapper[4860]: I0121 21:12:45.219274 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkdc" event={"ID":"dda00c6f-b112-49c0-bef6-aa2770a1c323","Type":"ContainerDied","Data":"ff9ccf29e544762e6087e2e047187ef42010e13f462d96d7b6afa48d603ace68"}
Jan 21 21:12:54 crc kubenswrapper[4860]: I0121 21:12:54.631575 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-854d9bbbb-7x8ng"]
Jan 21 21:12:54 crc kubenswrapper[4860]: I0121 21:12:54.632488 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" podUID="306ec328-aa70-4ce3-86da-2d3c6d6687b5" containerName="controller-manager" containerID="cri-o://597fdc06c44f2475a454833e7c95769463d85c98c8b883e3bb553b115b570fe5" gracePeriod=30
Jan 21 21:12:54 crc kubenswrapper[4860]: I0121 21:12:54.750728 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx"]
Jan 21 21:12:54 crc kubenswrapper[4860]: I0121 21:12:54.751000 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" podUID="d19ae23c-6e08-419d-8a8c-c3dd56f97954" containerName="route-controller-manager" containerID="cri-o://7fc0b0e848a2259de55b54ba1b3a44adb8875d1645f8906c89942826a2bca894" gracePeriod=30
Jan 21 21:12:55 crc kubenswrapper[4860]: I0121 21:12:55.712358 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" podUID="d1fafd15-88be-43d0-b7f0-750b4c592352" containerName="oauth-openshift" containerID="cri-o://8e302fd9b576efe352f096883635071768d95448c6a3a15ffbc717925ce42a26" gracePeriod=15
Jan 21 21:12:56 crc kubenswrapper[4860]: I0121 21:12:56.291348 4860 generic.go:334] "Generic (PLEG): container finished" podID="306ec328-aa70-4ce3-86da-2d3c6d6687b5" containerID="597fdc06c44f2475a454833e7c95769463d85c98c8b883e3bb553b115b570fe5" exitCode=0
Jan 21 21:12:56 crc kubenswrapper[4860]: I0121 21:12:56.291421 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" event={"ID":"306ec328-aa70-4ce3-86da-2d3c6d6687b5","Type":"ContainerDied","Data":"597fdc06c44f2475a454833e7c95769463d85c98c8b883e3bb553b115b570fe5"}
Jan 21 21:12:56 crc kubenswrapper[4860]: I0121 21:12:56.293776 4860 generic.go:334] "Generic (PLEG): container finished" podID="d19ae23c-6e08-419d-8a8c-c3dd56f97954" containerID="7fc0b0e848a2259de55b54ba1b3a44adb8875d1645f8906c89942826a2bca894" exitCode=0
Jan 21 21:12:56 crc kubenswrapper[4860]: I0121 21:12:56.293892 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" event={"ID":"d19ae23c-6e08-419d-8a8c-c3dd56f97954","Type":"ContainerDied","Data":"7fc0b0e848a2259de55b54ba1b3a44adb8875d1645f8906c89942826a2bca894"}
Jan 21 21:12:56 crc kubenswrapper[4860]: I0121 21:12:56.296390 4860 generic.go:334] "Generic (PLEG): container finished" podID="d1fafd15-88be-43d0-b7f0-750b4c592352" containerID="8e302fd9b576efe352f096883635071768d95448c6a3a15ffbc717925ce42a26" exitCode=0
Jan 21 21:12:56 crc kubenswrapper[4860]: I0121 21:12:56.296432 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" event={"ID":"d1fafd15-88be-43d0-b7f0-750b4c592352","Type":"ContainerDied","Data":"8e302fd9b576efe352f096883635071768d95448c6a3a15ffbc717925ce42a26"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.143740 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.180658 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"]
Jan 21 21:12:58 crc kubenswrapper[4860]: E0121 21:12:58.181521 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8f7599-5699-42cf-a872-4d517b948725" containerName="pruner"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.181552 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8f7599-5699-42cf-a872-4d517b948725" containerName="pruner"
Jan 21 21:12:58 crc kubenswrapper[4860]: E0121 21:12:58.181581 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19ae23c-6e08-419d-8a8c-c3dd56f97954" containerName="route-controller-manager"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.181589 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19ae23c-6e08-419d-8a8c-c3dd56f97954" containerName="route-controller-manager"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.181770 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b8f7599-5699-42cf-a872-4d517b948725" containerName="pruner"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.181795 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19ae23c-6e08-419d-8a8c-c3dd56f97954" containerName="route-controller-manager"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.182456 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.187404 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.190753 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9kmh\" (UniqueName: \"kubernetes.io/projected/d19ae23c-6e08-419d-8a8c-c3dd56f97954-kube-api-access-r9kmh\") pod \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.192735 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-login\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.192987 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-serving-cert\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.193049 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkk94\" (UniqueName: \"kubernetes.io/projected/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-kube-api-access-fkk94\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.193108 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-config\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.193144 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-client-ca\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.201672 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.203094 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19ae23c-6e08-419d-8a8c-c3dd56f97954-kube-api-access-r9kmh" (OuterVolumeSpecName: "kube-api-access-r9kmh") pod "d19ae23c-6e08-419d-8a8c-c3dd56f97954" (UID: "d19ae23c-6e08-419d-8a8c-c3dd56f97954"). InnerVolumeSpecName "kube-api-access-r9kmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.203346 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.203624 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"]
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293492 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19ae23c-6e08-419d-8a8c-c3dd56f97954-serving-cert\") pod \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293539 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/306ec328-aa70-4ce3-86da-2d3c6d6687b5-serving-cert\") pod \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293568 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-config\") pod \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293588 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-session\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293606 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-trusted-ca-bundle\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293624 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-policies\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293640 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvmsb\" (UniqueName: \"kubernetes.io/projected/306ec328-aa70-4ce3-86da-2d3c6d6687b5-kube-api-access-xvmsb\") pod \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293656 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-client-ca\") pod \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293676 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-error\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293702 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-provider-selection\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293723 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-config\") pod \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293745 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-client-ca\") pod \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\" (UID: \"d19ae23c-6e08-419d-8a8c-c3dd56f97954\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293772 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lqpk\" (UniqueName: \"kubernetes.io/projected/d1fafd15-88be-43d0-b7f0-750b4c592352-kube-api-access-8lqpk\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293788 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-serving-cert\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293809 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-idp-0-file-data\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293827 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-service-ca\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293855 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-dir\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293877 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-cliconfig\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293904 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-ocp-branding-template\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293925 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-router-certs\") pod \"d1fafd15-88be-43d0-b7f0-750b4c592352\" (UID: \"d1fafd15-88be-43d0-b7f0-750b4c592352\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.293957 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-proxy-ca-bundles\") pod \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\" (UID: \"306ec328-aa70-4ce3-86da-2d3c6d6687b5\") "
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.294075 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkk94\" (UniqueName: \"kubernetes.io/projected/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-kube-api-access-fkk94\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.294132 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-config\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.294163 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-client-ca\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.294219 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-serving-cert\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.294267 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9kmh\" (UniqueName: \"kubernetes.io/projected/d19ae23c-6e08-419d-8a8c-c3dd56f97954-kube-api-access-r9kmh\") on node \"crc\" DevicePath \"\""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.294279 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.295223 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.295896 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-client-ca" (OuterVolumeSpecName: "client-ca") pod "d19ae23c-6e08-419d-8a8c-c3dd56f97954" (UID: "d19ae23c-6e08-419d-8a8c-c3dd56f97954"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.295990 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-config" (OuterVolumeSpecName: "config") pod "306ec328-aa70-4ce3-86da-2d3c6d6687b5" (UID: "306ec328-aa70-4ce3-86da-2d3c6d6687b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.296614 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.301881 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.301964 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.302381 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.305102 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306ec328-aa70-4ce3-86da-2d3c6d6687b5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "306ec328-aa70-4ce3-86da-2d3c6d6687b5" (UID: "306ec328-aa70-4ce3-86da-2d3c6d6687b5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.306214 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "306ec328-aa70-4ce3-86da-2d3c6d6687b5" (UID: "306ec328-aa70-4ce3-86da-2d3c6d6687b5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.306487 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.307211 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-client-ca" (OuterVolumeSpecName: "client-ca") pod "306ec328-aa70-4ce3-86da-2d3c6d6687b5" (UID: "306ec328-aa70-4ce3-86da-2d3c6d6687b5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.309923 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-config\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.310742 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.312716 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-config" (OuterVolumeSpecName: "config") pod "d19ae23c-6e08-419d-8a8c-c3dd56f97954" (UID: "d19ae23c-6e08-419d-8a8c-c3dd56f97954"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.315611 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-client-ca\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.318049 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-serving-cert\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.318349 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19ae23c-6e08-419d-8a8c-c3dd56f97954-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19ae23c-6e08-419d-8a8c-c3dd56f97954" (UID: "d19ae23c-6e08-419d-8a8c-c3dd56f97954"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.320163 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.325372 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.337837 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.344264 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/306ec328-aa70-4ce3-86da-2d3c6d6687b5-kube-api-access-xvmsb" (OuterVolumeSpecName: "kube-api-access-xvmsb") pod "306ec328-aa70-4ce3-86da-2d3c6d6687b5" (UID: "306ec328-aa70-4ce3-86da-2d3c6d6687b5"). InnerVolumeSpecName "kube-api-access-xvmsb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.345531 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.349733 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1fafd15-88be-43d0-b7f0-750b4c592352-kube-api-access-8lqpk" (OuterVolumeSpecName: "kube-api-access-8lqpk") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "kube-api-access-8lqpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.349840 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkk94\" (UniqueName: \"kubernetes.io/projected/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-kube-api-access-fkk94\") pod \"route-controller-manager-ddbd7fbcf-pjx9l\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.355816 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d1fafd15-88be-43d0-b7f0-750b4c592352" (UID: "d1fafd15-88be-43d0-b7f0-750b4c592352"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.356175 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2slz" event={"ID":"adf72aac-c719-4347-824a-c033f4f3a240","Type":"ContainerStarted","Data":"d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.361430 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47" event={"ID":"d1fafd15-88be-43d0-b7f0-750b4c592352","Type":"ContainerDied","Data":"36ebd462f46788980de72d222c6c9f02be5f189dd4d3a438b3309b66696365f5"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.362276 4860 scope.go:117] "RemoveContainer" containerID="8e302fd9b576efe352f096883635071768d95448c6a3a15ffbc717925ce42a26"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.362833 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fvk47"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.373474 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6kb9" event={"ID":"a21cacfb-049f-48d8-8c5d-4ad7ee333834","Type":"ContainerStarted","Data":"b12a09d957ec59cca97e2731908ec775dff8fd8b6a5ad5673ee1fb57bdb897c1"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.377017 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngmkj" event={"ID":"ce35873b-5e42-4d33-9212-f78afae53fd0","Type":"ContainerStarted","Data":"c951dbd71470121fe3731102993ef5ca99c731cf2887e3cb52f16ecea1a8eb47"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.388797 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqdq" event={"ID":"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48","Type":"ContainerStarted","Data":"891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.391421 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx" event={"ID":"d19ae23c-6e08-419d-8a8c-c3dd56f97954","Type":"ContainerDied","Data":"ded34d7efd1fd39ecfc19c277b7f666342e73acbfc5ee3906b15fc64e8a33079"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.391512 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx"
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.403376 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rgh9" event={"ID":"41129b4d-292c-46eb-807b-ed0c56b43c9b","Type":"ContainerStarted","Data":"d5f015bafb58829f24dcf1f2a4bba53e99d5d391c44f4c0768c5f75809553329"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.406479 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkdc" event={"ID":"dda00c6f-b112-49c0-bef6-aa2770a1c323","Type":"ContainerStarted","Data":"806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.408393 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh97n" event={"ID":"6d731289-0564-4ea3-a2ea-c19c361c0d3e","Type":"ContainerStarted","Data":"c496bfdd97fbe3b2368d98283d7ccf6fe05c0ef4cf0ff75ed49f4cca3dd1db0d"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.412329 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l87hr" event={"ID":"c599eaed-fddf-4591-a474-f8c85a5470ae","Type":"ContainerStarted","Data":"144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.413839 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" event={"ID":"306ec328-aa70-4ce3-86da-2d3c6d6687b5","Type":"ContainerDied","Data":"3ba27bac2bc5111519ab8c4e21376cdef57f58c9cba38dfd0d47a0fb89e69c8f"}
Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.413920 4860 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-854d9bbbb-7x8ng" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422602 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422630 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422644 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422660 4860 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422670 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvmsb\" (UniqueName: \"kubernetes.io/projected/306ec328-aa70-4ce3-86da-2d3c6d6687b5-kube-api-access-xvmsb\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422680 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422689 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422701 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422711 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422722 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d19ae23c-6e08-419d-8a8c-c3dd56f97954-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422730 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422739 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lqpk\" (UniqueName: \"kubernetes.io/projected/d1fafd15-88be-43d0-b7f0-750b4c592352-kube-api-access-8lqpk\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422748 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422762 4860 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422775 4860 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1fafd15-88be-43d0-b7f0-750b4c592352-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422786 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422798 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422809 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1fafd15-88be-43d0-b7f0-750b4c592352-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422820 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/306ec328-aa70-4ce3-86da-2d3c6d6687b5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422830 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19ae23c-6e08-419d-8a8c-c3dd56f97954-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.422838 4860 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/306ec328-aa70-4ce3-86da-2d3c6d6687b5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:12:58 crc kubenswrapper[4860]: I0121 21:12:58.555996 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l" Jan 21 21:12:59 crc kubenswrapper[4860]: I0121 21:12:59.267500 4860 scope.go:117] "RemoveContainer" containerID="7fc0b0e848a2259de55b54ba1b3a44adb8875d1645f8906c89942826a2bca894" Jan 21 21:12:59 crc kubenswrapper[4860]: I0121 21:12:59.363596 4860 scope.go:117] "RemoveContainer" containerID="597fdc06c44f2475a454833e7c95769463d85c98c8b883e3bb553b115b570fe5" Jan 21 21:12:59 crc kubenswrapper[4860]: I0121 21:12:59.388155 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gzkdc" podStartSLOduration=7.785637905 podStartE2EDuration="1m39.388119205s" podCreationTimestamp="2026-01-21 21:11:20 +0000 UTC" firstStartedPulling="2026-01-21 21:11:26.203486022 +0000 UTC m=+178.425664492" lastFinishedPulling="2026-01-21 21:12:57.805967322 +0000 UTC m=+270.028145792" observedRunningTime="2026-01-21 21:12:59.042267253 +0000 UTC m=+271.264445753" watchObservedRunningTime="2026-01-21 21:12:59.388119205 +0000 UTC m=+271.610297695" Jan 21 21:12:59 crc kubenswrapper[4860]: I0121 21:12:59.391153 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fvk47"] Jan 21 21:12:59 crc kubenswrapper[4860]: I0121 21:12:59.396313 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fvk47"] Jan 21 21:12:59 crc kubenswrapper[4860]: I0121 21:12:59.401845 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-854d9bbbb-7x8ng"] Jan 21 21:12:59 crc kubenswrapper[4860]: I0121 21:12:59.405154 4860 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-854d9bbbb-7x8ng"] Jan 21 21:12:59 crc kubenswrapper[4860]: I0121 21:12:59.512802 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx"] Jan 21 21:12:59 crc kubenswrapper[4860]: I0121 21:12:59.512895 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7bf799db-rvwfx"] Jan 21 21:12:59 crc kubenswrapper[4860]: I0121 21:12:59.588403 4860 generic.go:334] "Generic (PLEG): container finished" podID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerID="c496bfdd97fbe3b2368d98283d7ccf6fe05c0ef4cf0ff75ed49f4cca3dd1db0d" exitCode=0 Jan 21 21:12:59 crc kubenswrapper[4860]: I0121 21:12:59.588477 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh97n" event={"ID":"6d731289-0564-4ea3-a2ea-c19c361c0d3e","Type":"ContainerDied","Data":"c496bfdd97fbe3b2368d98283d7ccf6fe05c0ef4cf0ff75ed49f4cca3dd1db0d"} Jan 21 21:13:00 crc kubenswrapper[4860]: I0121 21:13:00.588067 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="306ec328-aa70-4ce3-86da-2d3c6d6687b5" path="/var/lib/kubelet/pods/306ec328-aa70-4ce3-86da-2d3c6d6687b5/volumes" Jan 21 21:13:00 crc kubenswrapper[4860]: I0121 21:13:00.588998 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19ae23c-6e08-419d-8a8c-c3dd56f97954" path="/var/lib/kubelet/pods/d19ae23c-6e08-419d-8a8c-c3dd56f97954/volumes" Jan 21 21:13:00 crc kubenswrapper[4860]: I0121 21:13:00.589841 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1fafd15-88be-43d0-b7f0-750b4c592352" path="/var/lib/kubelet/pods/d1fafd15-88be-43d0-b7f0-750b4c592352/volumes" Jan 21 21:13:00 crc kubenswrapper[4860]: I0121 21:13:00.596779 4860 generic.go:334] "Generic (PLEG): container finished" podID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" 
containerID="b12a09d957ec59cca97e2731908ec775dff8fd8b6a5ad5673ee1fb57bdb897c1" exitCode=0 Jan 21 21:13:00 crc kubenswrapper[4860]: I0121 21:13:00.596830 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6kb9" event={"ID":"a21cacfb-049f-48d8-8c5d-4ad7ee333834","Type":"ContainerDied","Data":"b12a09d957ec59cca97e2731908ec775dff8fd8b6a5ad5673ee1fb57bdb897c1"} Jan 21 21:13:00 crc kubenswrapper[4860]: I0121 21:13:00.603789 4860 generic.go:334] "Generic (PLEG): container finished" podID="c599eaed-fddf-4591-a474-f8c85a5470ae" containerID="144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66" exitCode=0 Jan 21 21:13:00 crc kubenswrapper[4860]: I0121 21:13:00.603922 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l87hr" event={"ID":"c599eaed-fddf-4591-a474-f8c85a5470ae","Type":"ContainerDied","Data":"144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66"} Jan 21 21:13:00 crc kubenswrapper[4860]: I0121 21:13:00.608906 4860 generic.go:334] "Generic (PLEG): container finished" podID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerID="891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a" exitCode=0 Jan 21 21:13:00 crc kubenswrapper[4860]: I0121 21:13:00.608948 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqdq" event={"ID":"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48","Type":"ContainerDied","Data":"891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a"} Jan 21 21:13:00 crc kubenswrapper[4860]: I0121 21:13:00.703889 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"] Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.025915 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7646f58b4-9d4qz"] Jan 21 21:13:01 crc kubenswrapper[4860]: E0121 
21:13:01.026327 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1fafd15-88be-43d0-b7f0-750b4c592352" containerName="oauth-openshift" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.026351 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fafd15-88be-43d0-b7f0-750b4c592352" containerName="oauth-openshift" Jan 21 21:13:01 crc kubenswrapper[4860]: E0121 21:13:01.026378 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306ec328-aa70-4ce3-86da-2d3c6d6687b5" containerName="controller-manager" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.026387 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="306ec328-aa70-4ce3-86da-2d3c6d6687b5" containerName="controller-manager" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.026536 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1fafd15-88be-43d0-b7f0-750b4c592352" containerName="oauth-openshift" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.026568 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="306ec328-aa70-4ce3-86da-2d3c6d6687b5" containerName="controller-manager" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.027294 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.031747 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.031987 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.032278 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.032737 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.035315 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.035466 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.038459 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.044852 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7646f58b4-9d4qz"] Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.108965 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k68s\" (UniqueName: \"kubernetes.io/projected/665ba061-eec9-43db-83da-694c1e1e2cad-kube-api-access-8k68s\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " 
pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.109072 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-config\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.109108 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-proxy-ca-bundles\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.109222 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-client-ca\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.109281 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/665ba061-eec9-43db-83da-694c1e1e2cad-serving-cert\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.234112 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k68s\" (UniqueName: 
\"kubernetes.io/projected/665ba061-eec9-43db-83da-694c1e1e2cad-kube-api-access-8k68s\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.234651 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-config\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.234689 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-proxy-ca-bundles\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.234741 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.234775 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 
21:13:01.234806 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-client-ca\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.234842 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.240100 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/665ba061-eec9-43db-83da-694c1e1e2cad-serving-cert\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.240224 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.237102 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-config\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " 
pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.246299 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.247060 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.247096 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-client-ca\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.249994 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.250627 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.250906 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/665ba061-eec9-43db-83da-694c1e1e2cad-serving-cert\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.251433 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.251916 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-proxy-ca-bundles\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.253333 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k68s\" (UniqueName: \"kubernetes.io/projected/665ba061-eec9-43db-83da-694c1e1e2cad-kube-api-access-8k68s\") pod \"controller-manager-7646f58b4-9d4qz\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.253967 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.261652 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.265874 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.292113 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 21:13:01 crc kubenswrapper[4860]: W0121 21:13:01.353426 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf189d1a5_8e93_4d4d_b11d_29c60e3c3106.slice/crio-f8c4ca8a1ac3eb38a3d40310d30c3f45023b4928ce42741644371f69c766a823 WatchSource:0}: Error finding container f8c4ca8a1ac3eb38a3d40310d30c3f45023b4928ce42741644371f69c766a823: Status 404 returned error can't find the container with id f8c4ca8a1ac3eb38a3d40310d30c3f45023b4928ce42741644371f69c766a823 Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.413672 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.421076 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.701508 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.702548 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.834711 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l" event={"ID":"f189d1a5-8e93-4d4d-b11d-29c60e3c3106","Type":"ContainerStarted","Data":"f8c4ca8a1ac3eb38a3d40310d30c3f45023b4928ce42741644371f69c766a823"}
Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.851721 4860 generic.go:334] "Generic (PLEG): container finished" podID="adf72aac-c719-4347-824a-c033f4f3a240" containerID="d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f" exitCode=0
Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.851826 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2slz" event={"ID":"adf72aac-c719-4347-824a-c033f4f3a240","Type":"ContainerDied","Data":"d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f"}
Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.873269 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz"
Jan 21 21:13:01 crc kubenswrapper[4860]: I0121 21:13:01.901200 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zh97n" podStartSLOduration=7.466955229 podStartE2EDuration="1m39.901177092s" podCreationTimestamp="2026-01-21 21:11:22 +0000 UTC" firstStartedPulling="2026-01-21 21:11:27.434196236 +0000 UTC m=+179.656374706" lastFinishedPulling="2026-01-21 21:12:59.868418099 +0000 UTC m=+272.090596569" observedRunningTime="2026-01-21 21:13:01.898654892 +0000 UTC m=+274.120833372" watchObservedRunningTime="2026-01-21 21:13:01.901177092 +0000 UTC m=+274.123355562"
Jan 21 21:13:02 crc kubenswrapper[4860]: I0121 21:13:02.822166 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zh97n"
Jan 21 21:13:02 crc kubenswrapper[4860]: I0121 21:13:02.822592 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zh97n"
Jan 21 21:13:02 crc kubenswrapper[4860]: I0121 21:13:02.879363 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l" event={"ID":"f189d1a5-8e93-4d4d-b11d-29c60e3c3106","Type":"ContainerStarted","Data":"77f4ac959945ddafeedb41ddf7b6556236de13e3179a45a8da5e584d672a9e6d"}
Jan 21 21:13:02 crc kubenswrapper[4860]: I0121 21:13:02.879989 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:13:02 crc kubenswrapper[4860]: I0121 21:13:02.905169 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh97n" event={"ID":"6d731289-0564-4ea3-a2ea-c19c361c0d3e","Type":"ContainerStarted","Data":"7fa200e9fdb67b419359ca9a7acea43911dabcd0955dc4edcf79d45a70177866"}
Jan 21 21:13:02 crc kubenswrapper[4860]: I0121 21:13:02.938840 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6kb9" event={"ID":"a21cacfb-049f-48d8-8c5d-4ad7ee333834","Type":"ContainerStarted","Data":"36a0cbb2f58913b4fa90484a241250cca40140df5a07cbcffa1da4e09d72faf2"}
Jan 21 21:13:02 crc kubenswrapper[4860]: I0121 21:13:02.951876 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l87hr" event={"ID":"c599eaed-fddf-4591-a474-f8c85a5470ae","Type":"ContainerStarted","Data":"142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c"}
Jan 21 21:13:02 crc kubenswrapper[4860]: I0121 21:13:02.954454 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqdq" event={"ID":"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48","Type":"ContainerStarted","Data":"8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868"}
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.036779 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l" podStartSLOduration=9.03675496 podStartE2EDuration="9.03675496s" podCreationTimestamp="2026-01-21 21:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:13:02.938555846 +0000 UTC m=+275.160734326" watchObservedRunningTime="2026-01-21 21:13:03.03675496 +0000 UTC m=+275.258933430"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.060032 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z6kb9" podStartSLOduration=8.287199261 podStartE2EDuration="1m42.060010002s" podCreationTimestamp="2026-01-21 21:11:21 +0000 UTC" firstStartedPulling="2026-01-21 21:11:27.441349363 +0000 UTC m=+179.663527833" lastFinishedPulling="2026-01-21 21:13:01.214160094 +0000 UTC m=+273.436338574" observedRunningTime="2026-01-21 21:13:03.040589156 +0000 UTC m=+275.262767626" watchObservedRunningTime="2026-01-21 21:13:03.060010002 +0000 UTC m=+275.282188472"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.063103 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-f54c45747-fk8j2"]
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.063840 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.071997 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.072033 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l87hr" podStartSLOduration=9.498249801 podStartE2EDuration="1m42.07200241s" podCreationTimestamp="2026-01-21 21:11:21 +0000 UTC" firstStartedPulling="2026-01-21 21:11:28.58381926 +0000 UTC m=+180.805997730" lastFinishedPulling="2026-01-21 21:13:01.157571869 +0000 UTC m=+273.379750339" observedRunningTime="2026-01-21 21:13:03.069283453 +0000 UTC m=+275.291461923" watchObservedRunningTime="2026-01-21 21:13:03.07200241 +0000 UTC m=+275.294180900"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.072231 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.072465 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.072524 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.072727 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.075222 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.075540 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.076147 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.076288 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.076540 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.076851 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.096371 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.099437 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.100354 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.103863 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f54c45747-fk8j2"]
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.316614 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.318490 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.318645 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.318694 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ae29d0a-414f-4cc8-915c-7400988ae3e9-audit-dir\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.318721 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z6dw\" (UniqueName: \"kubernetes.io/projected/9ae29d0a-414f-4cc8-915c-7400988ae3e9-kube-api-access-8z6dw\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.318768 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.318789 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.318849 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.318893 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-audit-policies\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.318917 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-session\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.318977 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.319108 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.319241 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.319320 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-template-login\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.319349 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-template-error\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.376033 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9dqdq" podStartSLOduration=9.887854122 podStartE2EDuration="1m42.376007803s" podCreationTimestamp="2026-01-21 21:11:21 +0000 UTC" firstStartedPulling="2026-01-21 21:11:28.972510692 +0000 UTC m=+181.194689162" lastFinishedPulling="2026-01-21 21:13:01.460664373 +0000 UTC m=+273.682842843" observedRunningTime="2026-01-21 21:13:03.37004219 +0000 UTC m=+275.592220670" watchObservedRunningTime="2026-01-21 21:13:03.376007803 +0000 UTC m=+275.598186293"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.426690 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.426946 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.427874 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.427914 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-audit-policies\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.427962 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-session\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.427990 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.428011 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.428067 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.428088 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-template-login\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.428106 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-template-error\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.428154 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.428206 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.428226 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ae29d0a-414f-4cc8-915c-7400988ae3e9-audit-dir\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.428243 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z6dw\" (UniqueName: \"kubernetes.io/projected/9ae29d0a-414f-4cc8-915c-7400988ae3e9-kube-api-access-8z6dw\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.435259 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.437495 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-audit-policies\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.437823 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.438079 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.438154 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ae29d0a-414f-4cc8-915c-7400988ae3e9-audit-dir\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.483497 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.545008 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-template-login\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.545424 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.545887 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z6dw\" (UniqueName: \"kubernetes.io/projected/9ae29d0a-414f-4cc8-915c-7400988ae3e9-kube-api-access-8z6dw\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.546510 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-template-error\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.547464 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.594056 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-system-session\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.648997 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.649466 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9ae29d0a-414f-4cc8-915c-7400988ae3e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f54c45747-fk8j2\" (UID: \"9ae29d0a-414f-4cc8-915c-7400988ae3e9\") " pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.676191 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gzkdc" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerName="registry-server" probeResult="failure" output=<
Jan 21 21:13:03 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Jan 21 21:13:03 crc kubenswrapper[4860]: >
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.727996 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.926671 4860 patch_prober.go:28] interesting pod/route-controller-manager-ddbd7fbcf-pjx9l container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 21:13:03 crc kubenswrapper[4860]: I0121 21:13:03.926737 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l" podUID="f189d1a5-8e93-4d4d-b11d-29c60e3c3106" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 21:13:04 crc kubenswrapper[4860]: I0121 21:13:04.010550 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zh97n" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerName="registry-server" probeResult="failure" output=<
Jan 21 21:13:04 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Jan 21 21:13:04 crc kubenswrapper[4860]: >
Jan 21 21:13:04 crc kubenswrapper[4860]: I0121 21:13:04.144921 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6e52409b9123e2cc753897d622dc914077177ac494b0b7826bd842a4be81532d"}
Jan 21 21:13:04 crc kubenswrapper[4860]: I0121 21:13:04.155832 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:13:04 crc kubenswrapper[4860]: I0121 21:13:04.219886 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7646f58b4-9d4qz"]
Jan 21 21:13:04 crc kubenswrapper[4860]: W0121 21:13:04.240799 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod665ba061_eec9_43db_83da_694c1e1e2cad.slice/crio-3aaebaad3d86600d4ebeec83eb5c73555f8082450dbded50c038e359655baf98 WatchSource:0}: Error finding container 3aaebaad3d86600d4ebeec83eb5c73555f8082450dbded50c038e359655baf98: Status 404 returned error can't find the container with id 3aaebaad3d86600d4ebeec83eb5c73555f8082450dbded50c038e359655baf98
Jan 21 21:13:04 crc kubenswrapper[4860]: W0121 21:13:04.364049 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-8858749f33ce20555a9542526c75fc84649c615566de8430e02b1fc0fb52492a WatchSource:0}: Error finding container 8858749f33ce20555a9542526c75fc84649c615566de8430e02b1fc0fb52492a: Status 404 returned error can't find the container with id 8858749f33ce20555a9542526c75fc84649c615566de8430e02b1fc0fb52492a
Jan 21 21:13:05 crc kubenswrapper[4860]: I0121 21:13:05.233266 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8858749f33ce20555a9542526c75fc84649c615566de8430e02b1fc0fb52492a"}
Jan 21 21:13:05 crc kubenswrapper[4860]: I0121 21:13:05.235170 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6bca3c2c53fb61f226a0a2ba2ab7cf25f82843a55c7462f748155dc4b56c9350"}
Jan 21 21:13:05 crc kubenswrapper[4860]: I0121 21:13:05.236788 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6847da006d35650f40e04879bdae567b0c10917ce62a6aeb3d64e865c36ffcd6"}
Jan 21 21:13:05 crc kubenswrapper[4860]: I0121 21:13:05.238329 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" event={"ID":"665ba061-eec9-43db-83da-694c1e1e2cad","Type":"ContainerStarted","Data":"3aaebaad3d86600d4ebeec83eb5c73555f8082450dbded50c038e359655baf98"}
Jan 21 21:13:05 crc kubenswrapper[4860]: I0121 21:13:05.241555 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2slz" event={"ID":"adf72aac-c719-4347-824a-c033f4f3a240","Type":"ContainerStarted","Data":"a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05"}
Jan 21 21:13:05 crc kubenswrapper[4860]: I0121 21:13:05.524691 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f54c45747-fk8j2"]
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.254907 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"46ed03418eb72f04b7db7d73a82bbc1311b4337ce844c77c5181bc92d1165f69"}
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.258358 4860 generic.go:334] "Generic (PLEG): container finished" podID="41129b4d-292c-46eb-807b-ed0c56b43c9b" containerID="d5f015bafb58829f24dcf1f2a4bba53e99d5d391c44f4c0768c5f75809553329" exitCode=0
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.258443 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rgh9" event={"ID":"41129b4d-292c-46eb-807b-ed0c56b43c9b","Type":"ContainerDied","Data":"d5f015bafb58829f24dcf1f2a4bba53e99d5d391c44f4c0768c5f75809553329"}
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.259967 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" event={"ID":"665ba061-eec9-43db-83da-694c1e1e2cad","Type":"ContainerStarted","Data":"ced3032f43325ae105e5c8c2d4bf8422b7b1303494124253ac1aafcab2f2c633"}
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.260991 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz"
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.270637 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" event={"ID":"9ae29d0a-414f-4cc8-915c-7400988ae3e9","Type":"ContainerStarted","Data":"b641c0915769e39a1dfe5a7ddff6d8666996b6683ec68b90da7a9d5e7d8fa46d"}
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.270689 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" event={"ID":"9ae29d0a-414f-4cc8-915c-7400988ae3e9","Type":"ContainerStarted","Data":"44a4ef7cb105c8d027814114cd27b0c98212591c5db450c4a9be6f960c2a7880"}
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.277163 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2005a0de2180d00eee0a5ef5223809af49b076068cd1aea12f642a99c4a057dd"}
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.277619 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.351871 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" podStartSLOduration=12.351853661 podStartE2EDuration="12.351853661s" podCreationTimestamp="2026-01-21 21:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:13:06.329953228 +0000 UTC m=+278.552131708" watchObservedRunningTime="2026-01-21 21:13:06.351853661 +0000 UTC m=+278.574032121"
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.354651 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz"
Jan 21 21:13:06 crc kubenswrapper[4860]: I0121 21:13:06.387437 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m2slz" podStartSLOduration=10.946802577 podStartE2EDuration="1m46.387404563s" podCreationTimestamp="2026-01-21 21:11:20 +0000 UTC" firstStartedPulling="2026-01-21 21:11:27.425947695 +0000 UTC m=+179.648126165" lastFinishedPulling="2026-01-21 21:13:02.866549681 +0000 UTC m=+275.088728151" observedRunningTime="2026-01-21 21:13:06.381164389 +0000 UTC m=+278.603342869" watchObservedRunningTime="2026-01-21 21:13:06.387404563 +0000 UTC m=+278.609583033"
Jan 21 21:13:07 crc kubenswrapper[4860]: I0121 21:13:07.284216 4860 generic.go:334] "Generic (PLEG): container finished" podID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerID="c951dbd71470121fe3731102993ef5ca99c731cf2887e3cb52f16ecea1a8eb47" exitCode=0
Jan 21 21:13:07 crc kubenswrapper[4860]: I0121 21:13:07.284310 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngmkj" event={"ID":"ce35873b-5e42-4d33-9212-f78afae53fd0","Type":"ContainerDied","Data":"c951dbd71470121fe3731102993ef5ca99c731cf2887e3cb52f16ecea1a8eb47"}
Jan 21 21:13:07 crc kubenswrapper[4860]: I0121 21:13:07.343679 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" podStartSLOduration=37.343655443 podStartE2EDuration="37.343655443s" podCreationTimestamp="2026-01-21 21:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:13:07.341757746 +0000 UTC m=+279.563936236" watchObservedRunningTime="2026-01-21 21:13:07.343655443 +0000 UTC m=+279.565833923"
Jan 21 21:13:09 crc kubenswrapper[4860]: I0121 21:13:09.297283 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rgh9" event={"ID":"41129b4d-292c-46eb-807b-ed0c56b43c9b","Type":"ContainerStarted","Data":"eb18c30d9d4e28b2996d4dbd0c3bc5c047237a26f1e8fb1dfda892239d53c904"}
Jan 21 21:13:09 crc kubenswrapper[4860]: I0121 21:13:09.300152 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngmkj" event={"ID":"ce35873b-5e42-4d33-9212-f78afae53fd0","Type":"ContainerStarted","Data":"5a73c9072c764ef54beed91bfc7fb402cc45f4f3004944a84444b31bb41a1d45"}
Jan 21 21:13:09 crc kubenswrapper[4860]: I0121 21:13:09.321801 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9rgh9" podStartSLOduration=7.680334727 podStartE2EDuration="1m47.321778243s" podCreationTimestamp="2026-01-21 21:11:22 +0000 UTC" firstStartedPulling="2026-01-21 21:11:28.618030332 +0000 UTC m=+180.840208802" lastFinishedPulling="2026-01-21 21:13:08.259473838 +0000 UTC m=+280.481652318" observedRunningTime="2026-01-21 21:13:09.320604891 +0000 UTC m=+281.542783361" watchObservedRunningTime="2026-01-21 21:13:09.321778243 +0000 UTC
m=+281.543956713" Jan 21 21:13:09 crc kubenswrapper[4860]: I0121 21:13:09.343015 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ngmkj" podStartSLOduration=7.75127159 podStartE2EDuration="1m47.342985872s" podCreationTimestamp="2026-01-21 21:11:22 +0000 UTC" firstStartedPulling="2026-01-21 21:11:28.688177677 +0000 UTC m=+180.910356147" lastFinishedPulling="2026-01-21 21:13:08.279891959 +0000 UTC m=+280.502070429" observedRunningTime="2026-01-21 21:13:09.339836749 +0000 UTC m=+281.562015229" watchObservedRunningTime="2026-01-21 21:13:09.342985872 +0000 UTC m=+281.565164352" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.590270 4860 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.591438 4860 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.591599 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.592447 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882" gracePeriod=15 Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.592416 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84" gracePeriod=15 Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.592689 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27" gracePeriod=15 Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.591768 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4" gracePeriod=15 Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.592738 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680" gracePeriod=15 Jan 21 21:13:11 crc 
kubenswrapper[4860]: I0121 21:13:11.593124 4860 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 21:13:11 crc kubenswrapper[4860]: E0121 21:13:11.595146 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.595171 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 21:13:11 crc kubenswrapper[4860]: E0121 21:13:11.595195 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.595203 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 21:13:11 crc kubenswrapper[4860]: E0121 21:13:11.595216 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.595224 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 21:13:11 crc kubenswrapper[4860]: E0121 21:13:11.595238 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.595244 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 21:13:11 crc kubenswrapper[4860]: E0121 21:13:11.595258 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 21:13:11 
crc kubenswrapper[4860]: I0121 21:13:11.595265 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 21:13:11 crc kubenswrapper[4860]: E0121 21:13:11.595282 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.595289 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.595762 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.595788 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.595798 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.595806 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.595818 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.595834 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 21:13:11 crc kubenswrapper[4860]: E0121 21:13:11.596083 4860 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.596097 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.652476 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.717410 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.717466 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.717506 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.717521 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.717571 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.717631 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.717660 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.717692 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.818811 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.818871 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.818911 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.818986 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819004 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819036 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" 
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819052 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819094 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819164 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819209 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819230 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819252 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819273 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819295 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819314 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: I0121 21:13:11.819338 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:11 crc 
kubenswrapper[4860]: I0121 21:13:11.950315 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:11 crc kubenswrapper[4860]: W0121 21:13:11.981290 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1c322f23a9e4ade76e338b6290db9fe8f850597806eea6ced7270c038499d16d WatchSource:0}: Error finding container 1c322f23a9e4ade76e338b6290db9fe8f850597806eea6ced7270c038499d16d: Status 404 returned error can't find the container with id 1c322f23a9e4ade76e338b6290db9fe8f850597806eea6ced7270c038499d16d Jan 21 21:13:11 crc kubenswrapper[4860]: E0121 21:13:11.984017 4860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cdb67ad2ba3e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 21:13:11.983195108 +0000 UTC m=+284.205373578,LastTimestamp:2026-01-21 21:13:11.983195108 +0000 UTC m=+284.205373578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 21:13:12 crc kubenswrapper[4860]: I0121 21:13:12.201090 4860 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m2slz" Jan 21 21:13:12 crc kubenswrapper[4860]: I0121 21:13:12.201153 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m2slz" Jan 21 21:13:12 crc kubenswrapper[4860]: I0121 21:13:12.245423 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l87hr" Jan 21 21:13:12 crc kubenswrapper[4860]: I0121 21:13:12.245527 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l87hr" Jan 21 21:13:12 crc kubenswrapper[4860]: I0121 21:13:12.318682 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1c322f23a9e4ade76e338b6290db9fe8f850597806eea6ced7270c038499d16d"} Jan 21 21:13:12 crc kubenswrapper[4860]: I0121 21:13:12.353586 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9dqdq" Jan 21 21:13:12 crc kubenswrapper[4860]: I0121 21:13:12.353971 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9dqdq" Jan 21 21:13:12 crc kubenswrapper[4860]: I0121 21:13:12.371598 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z6kb9" Jan 21 21:13:12 crc kubenswrapper[4860]: I0121 21:13:12.372826 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z6kb9" Jan 21 21:13:12 crc kubenswrapper[4860]: I0121 21:13:12.739708 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gzkdc" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" 
containerName="registry-server" probeResult="failure" output=< Jan 21 21:13:12 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 21 21:13:12 crc kubenswrapper[4860]: > Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.250319 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m2slz" podUID="adf72aac-c719-4347-824a-c033f4f3a240" containerName="registry-server" probeResult="failure" output=< Jan 21 21:13:13 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 21 21:13:13 crc kubenswrapper[4860]: > Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.395290 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.395727 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.401483 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-l87hr" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" containerName="registry-server" probeResult="failure" output=< Jan 21 21:13:13 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 21 21:13:13 crc kubenswrapper[4860]: > Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.403916 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9dqdq" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerName="registry-server" probeResult="failure" output=< Jan 21 21:13:13 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 21 21:13:13 crc kubenswrapper[4860]: > Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.720270 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.720974 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z6kb9" Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.721440 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.721923 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.722496 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.722761 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.723057 4860 status_manager.go:851] "Failed to get status for pod" 
podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.730391 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.745750 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2"
Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.750184 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.750744 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.751012 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.753418 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.841319 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zh97n"
Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.842304 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.843135 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.843679 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:13 crc kubenswrapper[4860]: I0121 21:13:13.843983 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.027329 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9rgh9"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.027463 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9rgh9"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.436884 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.439236 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ngmkj" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerName="registry-server" probeResult="failure" output=<
Jan 21 21:13:14 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Jan 21 21:13:14 crc kubenswrapper[4860]: >
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.440583 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.441486 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27" exitCode=0
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.441508 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4" exitCode=0
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.441516 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680" exitCode=0
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.441524 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882" exitCode=2
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.441531 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84" exitCode=0
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.441607 4860 scope.go:117] "RemoveContainer" containerID="9826b2d2a712ed6a40915d6ae89c3a3fa3f431f108e89d83c97e34b1eb4e8cae"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.443437 4860 generic.go:334] "Generic (PLEG): container finished" podID="517ce25f-4d56-4696-9b6a-eba3e518584c" containerID="f4bb6fc008da451907bfa9e63e25f75b8bcf4bfc5d97d3543513421a65cdc2e7" exitCode=0
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.443496 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"517ce25f-4d56-4696-9b6a-eba3e518584c","Type":"ContainerDied","Data":"f4bb6fc008da451907bfa9e63e25f75b8bcf4bfc5d97d3543513421a65cdc2e7"}
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.444719 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.445239 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.445469 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.445683 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.445914 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.445972 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101"}
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.447474 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.447689 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.447921 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.449095 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.449451 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.488759 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.490001 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.490773 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.491566 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.492017 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:14 crc kubenswrapper[4860]: I0121 21:13:14.492415 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.066061 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9rgh9" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" containerName="registry-server" probeResult="failure" output=<
Jan 21 21:13:15 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Jan 21 21:13:15 crc kubenswrapper[4860]: >
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.836569 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.837607 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.837918 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.838228 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.838449 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.838724 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.848459 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.849301 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517ce25f-4d56-4696-9b6a-eba3e518584c-kube-api-access\") pod \"517ce25f-4d56-4696-9b6a-eba3e518584c\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") "
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.849363 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-kubelet-dir\") pod \"517ce25f-4d56-4696-9b6a-eba3e518584c\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") "
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.849479 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-var-lock\") pod \"517ce25f-4d56-4696-9b6a-eba3e518584c\" (UID: \"517ce25f-4d56-4696-9b6a-eba3e518584c\") "
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.849510 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "517ce25f-4d56-4696-9b6a-eba3e518584c" (UID: "517ce25f-4d56-4696-9b6a-eba3e518584c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.849626 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-var-lock" (OuterVolumeSpecName: "var-lock") pod "517ce25f-4d56-4696-9b6a-eba3e518584c" (UID: "517ce25f-4d56-4696-9b6a-eba3e518584c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.849674 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.849854 4860 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.849968 4860 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/517ce25f-4d56-4696-9b6a-eba3e518584c-var-lock\") on node \"crc\" DevicePath \"\""
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.850512 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.851178 4860 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.851504 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.851796 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.852102 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.852436 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.857105 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517ce25f-4d56-4696-9b6a-eba3e518584c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "517ce25f-4d56-4696-9b6a-eba3e518584c" (UID: "517ce25f-4d56-4696-9b6a-eba3e518584c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:13:15 crc kubenswrapper[4860]: I0121 21:13:15.951091 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517ce25f-4d56-4696-9b6a-eba3e518584c-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.052384 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.052519 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.052558 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.052564 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.052643 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.052700 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.053042 4860 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.053058 4860 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.053066 4860 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 21 21:13:16 crc kubenswrapper[4860]: E0121 21:13:16.253286 4860 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: E0121 21:13:16.253610 4860 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: E0121 21:13:16.254009 4860 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: E0121 21:13:16.254316 4860 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: E0121 21:13:16.254621 4860 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.254662 4860 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 21 21:13:16 crc kubenswrapper[4860]: E0121 21:13:16.255032 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="200ms"
Jan 21 21:13:16 crc kubenswrapper[4860]: E0121 21:13:16.459917 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="400ms"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.469450 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.471488 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.471495 4860 scope.go:117] "RemoveContainer" containerID="0be4ea0485f972445595c96d20456deb90fd35d118646fc9c38da6e36bf02d27"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.474196 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"517ce25f-4d56-4696-9b6a-eba3e518584c","Type":"ContainerDied","Data":"bb77d14cc6561f39438de47eede9a259b8e98ed582ada2be066278d5a5b4c380"}
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.474240 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb77d14cc6561f39438de47eede9a259b8e98ed582ada2be066278d5a5b4c380"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.474261 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.493866 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.494219 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.494572 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.494893 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.495272 4860 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.495533 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.495809 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.496094 4860 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.496258 4860 scope.go:117] "RemoveContainer" containerID="d4e1bf61677c72c2cf0659aa1bf11fb85a98091f59773e92f5a9b3610f7e30e4"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.496399 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.496692 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.496984 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.497325 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.513022 4860 scope.go:117] "RemoveContainer" containerID="2f52e9fca7c78c483898dc8ada6cd59a2187df53327909b56be18c922f0f9680"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.530528 4860 scope.go:117] "RemoveContainer" containerID="d0b86dc5e0a223e7708c6fa2a63b77321358a50683781bb770da6090f750e882"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.550076 4860 scope.go:117] "RemoveContainer" containerID="8753d2408ab81a37ee27932e748eac7cc9665026c58d9f37c92b7f88087d7d84"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.567213 4860 scope.go:117] "RemoveContainer" containerID="18404709b4198ed7c9229b1249ec3d0c058498643322cf1196f16c17aaf27f7b"
Jan 21 21:13:16 crc kubenswrapper[4860]: I0121 21:13:16.587378 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 21 21:13:16 crc kubenswrapper[4860]: E0121 21:13:16.662775 4860 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" volumeName="registry-storage"
Jan 21 21:13:16 crc kubenswrapper[4860]: E0121 21:13:16.860997 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="800ms"
Jan 21 21:13:17 crc kubenswrapper[4860]: E0121 21:13:17.736627 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="1.6s"
Jan 21 21:13:18 crc kubenswrapper[4860]: I0121 21:13:18.581834 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:18 crc kubenswrapper[4860]: I0121 21:13:18.582830 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:18 crc kubenswrapper[4860]: I0121 21:13:18.583246 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:18 crc kubenswrapper[4860]: I0121 21:13:18.583799 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:18 crc kubenswrapper[4860]: I0121 21:13:18.584108 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:18 crc kubenswrapper[4860]: E0121 21:13:18.623306 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:13:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:13:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:13:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T21:13:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:18 crc kubenswrapper[4860]: E0121 21:13:18.623717 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:18 crc kubenswrapper[4860]: E0121 21:13:18.623910 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 21:13:18 crc kubenswrapper[4860]: E0121 21:13:18.624186 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused"
Jan 21 
21:13:18 crc kubenswrapper[4860]: E0121 21:13:18.624446 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:18 crc kubenswrapper[4860]: E0121 21:13:18.624465 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 21:13:19 crc kubenswrapper[4860]: E0121 21:13:19.337966 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="3.2s" Jan 21 21:13:21 crc kubenswrapper[4860]: E0121 21:13:21.542034 4860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cdb67ad2ba3e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 21:13:11.983195108 +0000 UTC m=+284.205373578,LastTimestamp:2026-01-21 21:13:11.983195108 +0000 UTC m=+284.205373578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.742263 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gzkdc" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.743232 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.744118 4860 status_manager.go:851] "Failed to get status for pod" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.744502 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.744787 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.745067 4860 status_manager.go:851] "Failed to get status for pod" 
podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.745341 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.782574 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gzkdc" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.783269 4860 status_manager.go:851] "Failed to get status for pod" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.783632 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.784047 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.784390 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.784737 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:21 crc kubenswrapper[4860]: I0121 21:13:21.785048 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.242168 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m2slz" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.242872 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc 
kubenswrapper[4860]: I0121 21:13:22.243103 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.244111 4860 status_manager.go:851] "Failed to get status for pod" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.244431 4860 status_manager.go:851] "Failed to get status for pod" podUID="adf72aac-c719-4347-824a-c033f4f3a240" pod="openshift-marketplace/community-operators-m2slz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m2slz\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.244825 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.245097 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc 
kubenswrapper[4860]: I0121 21:13:22.245293 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.279964 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m2slz" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.280868 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.281698 4860 status_manager.go:851] "Failed to get status for pod" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.282336 4860 status_manager.go:851] "Failed to get status for pod" podUID="adf72aac-c719-4347-824a-c033f4f3a240" pod="openshift-marketplace/community-operators-m2slz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m2slz\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.282618 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" 
pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.282909 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.283286 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.283572 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l87hr" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.283707 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.284328 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.284689 4860 status_manager.go:851] "Failed to get status for pod" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.285011 4860 status_manager.go:851] "Failed to get status for pod" podUID="adf72aac-c719-4347-824a-c033f4f3a240" pod="openshift-marketplace/community-operators-m2slz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m2slz\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.285327 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.285614 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.285896 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 
38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.286161 4860 status_manager.go:851] "Failed to get status for pod" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" pod="openshift-marketplace/certified-operators-l87hr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l87hr\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.286472 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.322630 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l87hr" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.323079 4860 status_manager.go:851] "Failed to get status for pod" podUID="adf72aac-c719-4347-824a-c033f4f3a240" pod="openshift-marketplace/community-operators-m2slz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m2slz\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.323529 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.323767 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.323975 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.324144 4860 status_manager.go:851] "Failed to get status for pod" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" pod="openshift-marketplace/certified-operators-l87hr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l87hr\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.324298 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.324450 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.324616 4860 status_manager.go:851] "Failed to get status for pod" 
podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.400094 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9dqdq" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.401731 4860 status_manager.go:851] "Failed to get status for pod" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" pod="openshift-marketplace/community-operators-9dqdq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9dqdq\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.402028 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.402338 4860 status_manager.go:851] "Failed to get status for pod" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.403038 4860 status_manager.go:851] "Failed to get status for pod" podUID="adf72aac-c719-4347-824a-c033f4f3a240" pod="openshift-marketplace/community-operators-m2slz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m2slz\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.403432 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.403725 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.404032 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.404261 4860 status_manager.go:851] "Failed to get status for pod" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" pod="openshift-marketplace/certified-operators-l87hr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l87hr\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.404711 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.439953 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9dqdq" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.441661 4860 status_manager.go:851] "Failed to get status for pod" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" pod="openshift-marketplace/community-operators-9dqdq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9dqdq\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.442203 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.442597 4860 status_manager.go:851] "Failed to get status for pod" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.442824 4860 status_manager.go:851] "Failed to get status for pod" podUID="adf72aac-c719-4347-824a-c033f4f3a240" pod="openshift-marketplace/community-operators-m2slz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m2slz\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc 
kubenswrapper[4860]: I0121 21:13:22.443015 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.443277 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.443519 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.443833 4860 status_manager.go:851] "Failed to get status for pod" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" pod="openshift-marketplace/certified-operators-l87hr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l87hr\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.444042 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc 
kubenswrapper[4860]: E0121 21:13:22.538616 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="6.4s" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.578769 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.579811 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.581295 4860 status_manager.go:851] "Failed to get status for pod" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" pod="openshift-marketplace/community-operators-9dqdq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9dqdq\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.581853 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.582280 4860 status_manager.go:851] "Failed to get status for pod" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.582699 4860 status_manager.go:851] "Failed to get status for pod" podUID="adf72aac-c719-4347-824a-c033f4f3a240" pod="openshift-marketplace/community-operators-m2slz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m2slz\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.583005 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.583371 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.583760 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.584063 4860 status_manager.go:851] "Failed to get status for pod" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" pod="openshift-marketplace/certified-operators-l87hr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l87hr\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.597698 4860 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1e5e6715-eead-4da4-b376-f7d87b89e7b7" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.597741 4860 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1e5e6715-eead-4da4-b376-f7d87b89e7b7" Jan 21 21:13:22 crc kubenswrapper[4860]: E0121 21:13:22.598224 4860 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:22 crc kubenswrapper[4860]: I0121 21:13:22.598785 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:22 crc kubenswrapper[4860]: W0121 21:13:22.633588 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-a5d2ee3bd9962e8e334341f11b06c5093e5714ac821615fc821f814a919f1a90 WatchSource:0}: Error finding container a5d2ee3bd9962e8e334341f11b06c5093e5714ac821615fc821f814a919f1a90: Status 404 returned error can't find the container with id a5d2ee3bd9962e8e334341f11b06c5093e5714ac821615fc821f814a919f1a90 Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.390798 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.391533 4860 status_manager.go:851] "Failed to get status for pod" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" pod="openshift-marketplace/community-operators-9dqdq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9dqdq\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.391805 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.392050 4860 status_manager.go:851] "Failed to get status for pod" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: 
connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.392403 4860 status_manager.go:851] "Failed to get status for pod" podUID="adf72aac-c719-4347-824a-c033f4f3a240" pod="openshift-marketplace/community-operators-m2slz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m2slz\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.392956 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.393284 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" pod="openshift-marketplace/redhat-operators-ngmkj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ngmkj\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.393598 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.393889 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection 
refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.394324 4860 status_manager.go:851] "Failed to get status for pod" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" pod="openshift-marketplace/certified-operators-l87hr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l87hr\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.394619 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.436536 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.437269 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.437720 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.438019 4860 status_manager.go:851] "Failed to get status for pod" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" 
pod="openshift-marketplace/certified-operators-l87hr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l87hr\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.438292 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.438615 4860 status_manager.go:851] "Failed to get status for pod" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" pod="openshift-marketplace/community-operators-9dqdq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9dqdq\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.438899 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.439301 4860 status_manager.go:851] "Failed to get status for pod" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.439960 4860 status_manager.go:851] "Failed to get status for pod" 
podUID="adf72aac-c719-4347-824a-c033f4f3a240" pod="openshift-marketplace/community-operators-m2slz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m2slz\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.440226 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.440502 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" pod="openshift-marketplace/redhat-operators-ngmkj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ngmkj\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.527414 4860 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7803177d0875d3905615958617468cf8c12addc82a4fd85a1921a1ebb6a2bde4" exitCode=0 Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.527507 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7803177d0875d3905615958617468cf8c12addc82a4fd85a1921a1ebb6a2bde4"} Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.527571 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a5d2ee3bd9962e8e334341f11b06c5093e5714ac821615fc821f814a919f1a90"} Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 
21:13:23.527908 4860 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1e5e6715-eead-4da4-b376-f7d87b89e7b7" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.527928 4860 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1e5e6715-eead-4da4-b376-f7d87b89e7b7" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.528360 4860 status_manager.go:851] "Failed to get status for pod" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" pod="openshift-marketplace/community-operators-9dqdq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9dqdq\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: E0121 21:13:23.528356 4860 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.528596 4860 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.528807 4860 status_manager.go:851] "Failed to get status for pod" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" pod="openshift-marketplace/certified-operators-gzkdc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkdc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.529030 4860 
status_manager.go:851] "Failed to get status for pod" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" pod="openshift-marketplace/redhat-operators-ngmkj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ngmkj\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.529269 4860 status_manager.go:851] "Failed to get status for pod" podUID="adf72aac-c719-4347-824a-c033f4f3a240" pod="openshift-marketplace/community-operators-m2slz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m2slz\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.529455 4860 status_manager.go:851] "Failed to get status for pod" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" pod="openshift-marketplace/redhat-marketplace-z6kb9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z6kb9\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.529643 4860 status_manager.go:851] "Failed to get status for pod" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.530001 4860 status_manager.go:851] "Failed to get status for pod" podUID="9ae29d0a-414f-4cc8-915c-7400988ae3e9" pod="openshift-authentication/oauth-openshift-f54c45747-fk8j2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-f54c45747-fk8j2\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.530261 4860 
status_manager.go:851] "Failed to get status for pod" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" pod="openshift-marketplace/certified-operators-l87hr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l87hr\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:23 crc kubenswrapper[4860]: I0121 21:13:23.530571 4860 status_manager.go:851] "Failed to get status for pod" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" pod="openshift-marketplace/redhat-marketplace-zh97n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zh97n\": dial tcp 38.102.83.227:6443: connect: connection refused" Jan 21 21:13:24 crc kubenswrapper[4860]: I0121 21:13:24.079355 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:13:24 crc kubenswrapper[4860]: I0121 21:13:24.122408 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:13:24 crc kubenswrapper[4860]: I0121 21:13:24.539434 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a1ab38132a5707aaec78f673ff1c61ceb6d0822e489b349c882fd844748949b7"} Jan 21 21:13:24 crc kubenswrapper[4860]: I0121 21:13:24.539496 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"03a5f52bdd5f6bef7b2894f8237548cbc7b535c908e39035e90ac1b81d63bfed"} Jan 21 21:13:24 crc kubenswrapper[4860]: I0121 21:13:24.539509 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"12112036a4410b63762478de76f878df35a4fbe2f0f122bac3c767fcf152584e"} Jan 21 21:13:24 crc kubenswrapper[4860]: I0121 21:13:24.539521 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ad1d391e9bc47d9c5a62f742a97b9bdbb04cd0843860085ed3c6c1ba25c4f5ce"} Jan 21 21:13:25 crc kubenswrapper[4860]: I0121 21:13:25.552467 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d011a81638dabcce224488c3b0e4c9c4c1fb69bb4880b6ba860713513dd80100"} Jan 21 21:13:25 crc kubenswrapper[4860]: I0121 21:13:25.552769 4860 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1e5e6715-eead-4da4-b376-f7d87b89e7b7" Jan 21 21:13:25 crc kubenswrapper[4860]: I0121 21:13:25.552784 4860 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1e5e6715-eead-4da4-b376-f7d87b89e7b7" Jan 21 21:13:25 crc kubenswrapper[4860]: I0121 21:13:25.553165 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:26 crc kubenswrapper[4860]: I0121 21:13:26.561528 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 21:13:26 crc kubenswrapper[4860]: I0121 21:13:26.561600 4860 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e" exitCode=1 Jan 21 21:13:26 crc kubenswrapper[4860]: I0121 21:13:26.561643 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e"} Jan 21 21:13:26 crc kubenswrapper[4860]: I0121 21:13:26.562484 4860 scope.go:117] "RemoveContainer" containerID="b75ed389310cfb9bebf5236bb929928dcd30d5db9fa00de0d666f19691f9607e" Jan 21 21:13:27 crc kubenswrapper[4860]: I0121 21:13:27.573534 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 21:13:27 crc kubenswrapper[4860]: I0121 21:13:27.573603 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6cb3ab49a73fa70443c121300ee895c4b6a81892879fde4d0e01074fb2e9d4cf"} Jan 21 21:13:27 crc kubenswrapper[4860]: I0121 21:13:27.599393 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:27 crc kubenswrapper[4860]: I0121 21:13:27.599751 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:27 crc kubenswrapper[4860]: I0121 21:13:27.605041 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:28 crc kubenswrapper[4860]: I0121 21:13:28.428982 4860 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 21:13:30 crc kubenswrapper[4860]: I0121 21:13:30.141680 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:13:30 crc kubenswrapper[4860]: I0121 21:13:30.146658 4860 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:13:30 crc kubenswrapper[4860]: I0121 21:13:30.334331 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:13:30 crc kubenswrapper[4860]: I0121 21:13:30.560070 4860 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:30 crc kubenswrapper[4860]: I0121 21:13:30.719665 4860 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1e5e6715-eead-4da4-b376-f7d87b89e7b7" Jan 21 21:13:30 crc kubenswrapper[4860]: I0121 21:13:30.719694 4860 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1e5e6715-eead-4da4-b376-f7d87b89e7b7" Jan 21 21:13:30 crc kubenswrapper[4860]: I0121 21:13:30.725806 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 21:13:30 crc kubenswrapper[4860]: I0121 21:13:30.728713 4860 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ccb07ff0-106f-4d93-ae0b-d48f0bdc8f23" Jan 21 21:13:31 crc kubenswrapper[4860]: I0121 21:13:31.726093 4860 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1e5e6715-eead-4da4-b376-f7d87b89e7b7" Jan 21 21:13:31 crc kubenswrapper[4860]: I0121 21:13:31.726432 4860 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1e5e6715-eead-4da4-b376-f7d87b89e7b7" Jan 21 21:13:36 crc kubenswrapper[4860]: I0121 21:13:36.917573 4860 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 21:13:37 crc kubenswrapper[4860]: I0121 21:13:37.452062 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 21:13:37 crc kubenswrapper[4860]: I0121 21:13:37.452831 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 21:13:38 crc kubenswrapper[4860]: I0121 21:13:38.134800 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 21:13:38 crc kubenswrapper[4860]: I0121 21:13:38.438073 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 21:13:38 crc kubenswrapper[4860]: I0121 21:13:38.527057 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 21:13:38 crc kubenswrapper[4860]: I0121 21:13:38.559622 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 21:13:38 crc kubenswrapper[4860]: I0121 21:13:38.598047 4860 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ccb07ff0-106f-4d93-ae0b-d48f0bdc8f23" Jan 21 21:13:38 crc kubenswrapper[4860]: I0121 21:13:38.754013 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 21:13:38 crc kubenswrapper[4860]: I0121 21:13:38.910149 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 21:13:39 crc kubenswrapper[4860]: I0121 21:13:39.492427 4860 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 21:13:39 crc kubenswrapper[4860]: I0121 21:13:39.845335 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 21:13:40 crc kubenswrapper[4860]: I0121 21:13:40.283286 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 21:13:40 crc kubenswrapper[4860]: I0121 21:13:40.337770 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 21:13:40 crc kubenswrapper[4860]: I0121 21:13:40.454215 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 21:13:40 crc kubenswrapper[4860]: I0121 21:13:40.660052 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 21:13:41 crc kubenswrapper[4860]: I0121 21:13:41.141183 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 21:13:41 crc kubenswrapper[4860]: I0121 21:13:41.425085 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 21:13:41 crc kubenswrapper[4860]: I0121 21:13:41.577346 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 21:13:42 crc kubenswrapper[4860]: I0121 21:13:42.130077 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 21:13:42 crc kubenswrapper[4860]: I0121 21:13:42.206996 4860 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 21:13:42 crc 
kubenswrapper[4860]: I0121 21:13:42.475677 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 21 21:13:42 crc kubenswrapper[4860]: I0121 21:13:42.919042 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 21 21:13:42 crc kubenswrapper[4860]: I0121 21:13:42.976706 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 21:13:43 crc kubenswrapper[4860]: I0121 21:13:43.000663 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 21 21:13:43 crc kubenswrapper[4860]: I0121 21:13:43.203103 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 21 21:13:43 crc kubenswrapper[4860]: I0121 21:13:43.357526 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 21 21:13:43 crc kubenswrapper[4860]: I0121 21:13:43.515666 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 21 21:13:43 crc kubenswrapper[4860]: I0121 21:13:43.629006 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 21 21:13:43 crc kubenswrapper[4860]: I0121 21:13:43.632288 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 21:13:43 crc kubenswrapper[4860]: I0121 21:13:43.752917 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 21 21:13:43 crc kubenswrapper[4860]: I0121 21:13:43.854420 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 21 21:13:43 crc kubenswrapper[4860]: I0121 21:13:43.934662 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 21 21:13:44 crc kubenswrapper[4860]: I0121 21:13:44.040073 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 21 21:13:44 crc kubenswrapper[4860]: I0121 21:13:44.325172 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 21:13:44 crc kubenswrapper[4860]: I0121 21:13:44.528725 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 21 21:13:44 crc kubenswrapper[4860]: I0121 21:13:44.542117 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 21 21:13:44 crc kubenswrapper[4860]: I0121 21:13:44.616253 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 21 21:13:44 crc kubenswrapper[4860]: I0121 21:13:44.972393 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.030486 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.040175 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.076045 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.197093 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.238835 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.287509 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.311419 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.335281 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.419366 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.554149 4860 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.638908 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.736419 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.796365 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.796373 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 21 21:13:45 crc kubenswrapper[4860]: I0121 21:13:45.841178 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.202605 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.241433 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.271393 4860 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.353493 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.358059 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.387139 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.426561 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.448698 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.459623 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.549192 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.635370 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.642728 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.714617 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.876213 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 21 21:13:46 crc kubenswrapper[4860]: I0121 21:13:46.961816 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.045201 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.211218 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.222399 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.290124 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.294267 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.305788 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.430148 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.527058 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.584682 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.708887 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.781152 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.803550 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.820268 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.842823 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.893827 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.925154 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.954924 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.967199 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 21:13:47 crc kubenswrapper[4860]: I0121 21:13:47.983602 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.115051 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.125497 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.261612 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.277317 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.301689 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.321405 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.377537 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.390043 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.398289 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.441818 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.457332 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.574649 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.600900 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.602112 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.608472 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.663646 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.687188 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 21 21:13:48 crc kubenswrapper[4860]: I0121 21:13:48.786616 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.044207 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.105150 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.195600 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.307056 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.341386 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.349448 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.364464 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.490762 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.730651 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.732978 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.778672 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.826090 4860 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.837184 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.863426 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.885908 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 21 21:13:49 crc kubenswrapper[4860]: I0121 21:13:49.999492 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.054509 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.098466 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.123279 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.360149 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.389046 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.397879 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.514463 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.598584 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.598630 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.605955 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.642481 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.899152 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 21 21:13:50 crc kubenswrapper[4860]: I0121 21:13:50.971322 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.009781 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.033514 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.141011 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.226233 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.262805 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.267771 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.349552 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.349700 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.355090 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.457905 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.460883 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.560408 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.563366 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.683227 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.702171 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.776985 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.803851 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.875727 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.931709 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.975984 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.976534 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.983798 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 21 21:13:51 crc kubenswrapper[4860]: I0121 21:13:51.985365 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.011514 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.014892 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.124765 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.188920 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.198308 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.198696 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.224343 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.249607 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.258173 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.320353 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.323401 4860 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.326343 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.326292191 podStartE2EDuration="41.326292191s" podCreationTimestamp="2026-01-21 21:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:13:30.287622261 +0000 UTC m=+302.509800731" watchObservedRunningTime="2026-01-21 21:13:52.326292191 +0000 UTC m=+324.548470661"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.330334 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.330409 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.335805 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.353041 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.353022967 podStartE2EDuration="22.353022967s" podCreationTimestamp="2026-01-21 21:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:13:52.351836815 +0000 UTC m=+324.574015285" watchObservedRunningTime="2026-01-21 21:13:52.353022967 +0000 UTC m=+324.575201447"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.377579 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.452310 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.465353 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.556434 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.676654 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.695246 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.710882 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.746518 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.801853 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.852400 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.936381 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.961196 4860 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.961591 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101" gracePeriod=5
Jan 21 21:13:52 crc kubenswrapper[4860]: I0121 21:13:52.962125 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.155327 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.155757 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.200410 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.278725 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.282204 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.335543 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.364086 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.398481 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.414256 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.621252 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.718865 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.740831 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.906076 4860 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 21:13:53 crc kubenswrapper[4860]: I0121 21:13:53.946773 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.056329 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.249626 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.349699 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.389552 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.390080 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.415421 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.518925 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.532306 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.567607 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.773027 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.935586 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.958415 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 21 21:13:54 crc kubenswrapper[4860]: I0121 21:13:54.994371 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.111327 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.131215 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.139247 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.159708 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.168527 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.179822 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.194003 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.227109 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.300849 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.342923 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.518049 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.629033 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.911591 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 21:13:55 crc kubenswrapper[4860]: I0121 21:13:55.946478 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.036342 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.076593 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.085156 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.140145 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.172460 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.229817 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.241998 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.417031 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.513260 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 21
21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.694117 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.803871 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.883675 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.887925 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.916966 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 21:13:56 crc kubenswrapper[4860]: I0121 21:13:56.945625 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 21:13:57 crc kubenswrapper[4860]: I0121 21:13:57.616624 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 21:13:57 crc kubenswrapper[4860]: I0121 21:13:57.821137 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 21:13:57 crc kubenswrapper[4860]: I0121 21:13:57.920067 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.006097 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.197814 4860 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.242801 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.274262 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.540522 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.540946 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.587065 4860 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.599601 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.599642 4860 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a5f6e8a0-c60f-4f61-9b6a-ee3debe42f28" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.603200 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.603227 4860 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a5f6e8a0-c60f-4f61-9b6a-ee3debe42f28" Jan 
21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.680627 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.688157 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.723824 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.723951 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.723996 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.724068 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.724106 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.724122 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.724191 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.724234 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.724249 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.724475 4860 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.724490 4860 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.724502 4860 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.724513 4860 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.734203 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.826032 4860 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.912572 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.912988 4860 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101" exitCode=137 Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.913083 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.913180 4860 scope.go:117] "RemoveContainer" containerID="9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.913810 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.942636 4860 scope.go:117] "RemoveContainer" containerID="9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101" Jan 21 21:13:58 crc kubenswrapper[4860]: E0121 21:13:58.944512 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101\": container with ID starting with 9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101 not found: ID does not exist" 
containerID="9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101" Jan 21 21:13:58 crc kubenswrapper[4860]: I0121 21:13:58.944602 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101"} err="failed to get container status \"9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101\": rpc error: code = NotFound desc = could not find container \"9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101\": container with ID starting with 9133c3d3e72d8de46e3975c39abf4a0bdf10178f400dcdf7f89bc1f8138a1101 not found: ID does not exist" Jan 21 21:13:59 crc kubenswrapper[4860]: I0121 21:13:59.064472 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 21:13:59 crc kubenswrapper[4860]: I0121 21:13:59.527295 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 21:13:59 crc kubenswrapper[4860]: I0121 21:13:59.575450 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 21:13:59 crc kubenswrapper[4860]: I0121 21:13:59.704621 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 21:13:59 crc kubenswrapper[4860]: I0121 21:13:59.950886 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 21:14:00 crc kubenswrapper[4860]: I0121 21:14:00.029293 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 21:14:00 crc kubenswrapper[4860]: I0121 21:14:00.538654 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 21:14:00 crc kubenswrapper[4860]: I0121 
21:14:00.589088 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 21:14:01 crc kubenswrapper[4860]: I0121 21:14:01.061043 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 21:14:14 crc kubenswrapper[4860]: I0121 21:14:14.625736 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7646f58b4-9d4qz"] Jan 21 21:14:14 crc kubenswrapper[4860]: I0121 21:14:14.626600 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" podUID="665ba061-eec9-43db-83da-694c1e1e2cad" containerName="controller-manager" containerID="cri-o://ced3032f43325ae105e5c8c2d4bf8422b7b1303494124253ac1aafcab2f2c633" gracePeriod=30 Jan 21 21:14:14 crc kubenswrapper[4860]: I0121 21:14:14.720994 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"] Jan 21 21:14:14 crc kubenswrapper[4860]: I0121 21:14:14.721313 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l" podUID="f189d1a5-8e93-4d4d-b11d-29c60e3c3106" containerName="route-controller-manager" containerID="cri-o://77f4ac959945ddafeedb41ddf7b6556236de13e3179a45a8da5e584d672a9e6d" gracePeriod=30 Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.035585 4860 generic.go:334] "Generic (PLEG): container finished" podID="665ba061-eec9-43db-83da-694c1e1e2cad" containerID="ced3032f43325ae105e5c8c2d4bf8422b7b1303494124253ac1aafcab2f2c633" exitCode=0 Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.035733 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" event={"ID":"665ba061-eec9-43db-83da-694c1e1e2cad","Type":"ContainerDied","Data":"ced3032f43325ae105e5c8c2d4bf8422b7b1303494124253ac1aafcab2f2c633"} Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.037156 4860 generic.go:334] "Generic (PLEG): container finished" podID="f189d1a5-8e93-4d4d-b11d-29c60e3c3106" containerID="77f4ac959945ddafeedb41ddf7b6556236de13e3179a45a8da5e584d672a9e6d" exitCode=0 Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.037179 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l" event={"ID":"f189d1a5-8e93-4d4d-b11d-29c60e3c3106","Type":"ContainerDied","Data":"77f4ac959945ddafeedb41ddf7b6556236de13e3179a45a8da5e584d672a9e6d"} Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.070118 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.125581 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-client-ca\") pod \"665ba061-eec9-43db-83da-694c1e1e2cad\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.125812 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/665ba061-eec9-43db-83da-694c1e1e2cad-serving-cert\") pod \"665ba061-eec9-43db-83da-694c1e1e2cad\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.125876 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-proxy-ca-bundles\") pod 
\"665ba061-eec9-43db-83da-694c1e1e2cad\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.125984 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-config\") pod \"665ba061-eec9-43db-83da-694c1e1e2cad\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.126035 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k68s\" (UniqueName: \"kubernetes.io/projected/665ba061-eec9-43db-83da-694c1e1e2cad-kube-api-access-8k68s\") pod \"665ba061-eec9-43db-83da-694c1e1e2cad\" (UID: \"665ba061-eec9-43db-83da-694c1e1e2cad\") " Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.126779 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-client-ca" (OuterVolumeSpecName: "client-ca") pod "665ba061-eec9-43db-83da-694c1e1e2cad" (UID: "665ba061-eec9-43db-83da-694c1e1e2cad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.126875 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-config" (OuterVolumeSpecName: "config") pod "665ba061-eec9-43db-83da-694c1e1e2cad" (UID: "665ba061-eec9-43db-83da-694c1e1e2cad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.127513 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "665ba061-eec9-43db-83da-694c1e1e2cad" (UID: "665ba061-eec9-43db-83da-694c1e1e2cad"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.131676 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/665ba061-eec9-43db-83da-694c1e1e2cad-kube-api-access-8k68s" (OuterVolumeSpecName: "kube-api-access-8k68s") pod "665ba061-eec9-43db-83da-694c1e1e2cad" (UID: "665ba061-eec9-43db-83da-694c1e1e2cad"). InnerVolumeSpecName "kube-api-access-8k68s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.142074 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665ba061-eec9-43db-83da-694c1e1e2cad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "665ba061-eec9-43db-83da-694c1e1e2cad" (UID: "665ba061-eec9-43db-83da-694c1e1e2cad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.192819 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.227068 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-client-ca\") pod \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.227165 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-serving-cert\") pod \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.227248 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkk94\" (UniqueName: \"kubernetes.io/projected/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-kube-api-access-fkk94\") pod \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.228036 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-client-ca" (OuterVolumeSpecName: "client-ca") pod "f189d1a5-8e93-4d4d-b11d-29c60e3c3106" (UID: "f189d1a5-8e93-4d4d-b11d-29c60e3c3106"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.228177 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-config\") pod \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\" (UID: \"f189d1a5-8e93-4d4d-b11d-29c60e3c3106\") " Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.228624 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.228661 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.228679 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k68s\" (UniqueName: \"kubernetes.io/projected/665ba061-eec9-43db-83da-694c1e1e2cad-kube-api-access-8k68s\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.228692 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.228691 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-config" (OuterVolumeSpecName: "config") pod "f189d1a5-8e93-4d4d-b11d-29c60e3c3106" (UID: "f189d1a5-8e93-4d4d-b11d-29c60e3c3106"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.228703 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/665ba061-eec9-43db-83da-694c1e1e2cad-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.228717 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/665ba061-eec9-43db-83da-694c1e1e2cad-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.231373 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-kube-api-access-fkk94" (OuterVolumeSpecName: "kube-api-access-fkk94") pod "f189d1a5-8e93-4d4d-b11d-29c60e3c3106" (UID: "f189d1a5-8e93-4d4d-b11d-29c60e3c3106"). InnerVolumeSpecName "kube-api-access-fkk94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.231506 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f189d1a5-8e93-4d4d-b11d-29c60e3c3106" (UID: "f189d1a5-8e93-4d4d-b11d-29c60e3c3106"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.329795 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.329863 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkk94\" (UniqueName: \"kubernetes.io/projected/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-kube-api-access-fkk94\") on node \"crc\" DevicePath \"\""
Jan 21 21:14:15 crc kubenswrapper[4860]: I0121 21:14:15.329878 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f189d1a5-8e93-4d4d-b11d-29c60e3c3106-config\") on node \"crc\" DevicePath \"\""
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.045052 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.046729 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.045063 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l" event={"ID":"f189d1a5-8e93-4d4d-b11d-29c60e3c3106","Type":"ContainerDied","Data":"f8c4ca8a1ac3eb38a3d40310d30c3f45023b4928ce42741644371f69c766a823"}
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.048089 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7646f58b4-9d4qz" event={"ID":"665ba061-eec9-43db-83da-694c1e1e2cad","Type":"ContainerDied","Data":"3aaebaad3d86600d4ebeec83eb5c73555f8082450dbded50c038e359655baf98"}
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.048130 4860 scope.go:117] "RemoveContainer" containerID="77f4ac959945ddafeedb41ddf7b6556236de13e3179a45a8da5e584d672a9e6d"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.069630 4860 scope.go:117] "RemoveContainer" containerID="ced3032f43325ae105e5c8c2d4bf8422b7b1303494124253ac1aafcab2f2c633"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.094545 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7646f58b4-9d4qz"]
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.100264 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7646f58b4-9d4qz"]
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.111589 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"]
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.115718 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ddbd7fbcf-pjx9l"]
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.121795 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"]
Jan 21 21:14:16 crc kubenswrapper[4860]: E0121 21:14:16.122612 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.122702 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 21:14:16 crc kubenswrapper[4860]: E0121 21:14:16.122785 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665ba061-eec9-43db-83da-694c1e1e2cad" containerName="controller-manager"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.122851 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="665ba061-eec9-43db-83da-694c1e1e2cad" containerName="controller-manager"
Jan 21 21:14:16 crc kubenswrapper[4860]: E0121 21:14:16.122952 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f189d1a5-8e93-4d4d-b11d-29c60e3c3106" containerName="route-controller-manager"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.123029 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f189d1a5-8e93-4d4d-b11d-29c60e3c3106" containerName="route-controller-manager"
Jan 21 21:14:16 crc kubenswrapper[4860]: E0121 21:14:16.123100 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" containerName="installer"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.123177 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" containerName="installer"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.123514 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f189d1a5-8e93-4d4d-b11d-29c60e3c3106" containerName="route-controller-manager"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.123600 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="517ce25f-4d56-4696-9b6a-eba3e518584c" containerName="installer"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.123668 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.123768 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="665ba061-eec9-43db-83da-694c1e1e2cad" containerName="controller-manager"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.124523 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.127151 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.127456 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"]
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.128586 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.129609 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"]
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.130481 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.130715 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.133007 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.133249 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.133321 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.133489 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.133071 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"]
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.133873 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.133870 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.134098 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.134442 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.137322 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.138968 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.238333 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-client-ca\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.238697 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-client-ca\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.238865 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-serving-cert\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.238977 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-config\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.239109 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxlks\" (UniqueName: \"kubernetes.io/projected/3077cd96-27db-40f8-8127-e443ba72fd79-kube-api-access-jxlks\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.239224 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-config\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.239388 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-proxy-ca-bundles\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.239628 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3077cd96-27db-40f8-8127-e443ba72fd79-serving-cert\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.239798 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52kz5\" (UniqueName: \"kubernetes.io/projected/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-kube-api-access-52kz5\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.341376 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-client-ca\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.341433 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-client-ca\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.341467 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-config\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.341491 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-serving-cert\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.341539 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxlks\" (UniqueName: \"kubernetes.io/projected/3077cd96-27db-40f8-8127-e443ba72fd79-kube-api-access-jxlks\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.341646 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-config\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.341733 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-proxy-ca-bundles\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.341796 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3077cd96-27db-40f8-8127-e443ba72fd79-serving-cert\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.341823 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52kz5\" (UniqueName: \"kubernetes.io/projected/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-kube-api-access-52kz5\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.342831 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-client-ca\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.343316 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-proxy-ca-bundles\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.343358 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-config\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.345449 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-config\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.346721 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3077cd96-27db-40f8-8127-e443ba72fd79-serving-cert\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.346887 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-client-ca\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.350685 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-serving-cert\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.364893 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52kz5\" (UniqueName: \"kubernetes.io/projected/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-kube-api-access-52kz5\") pod \"controller-manager-7cd856c7d8-9zmbb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.365458 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxlks\" (UniqueName: \"kubernetes.io/projected/3077cd96-27db-40f8-8127-e443ba72fd79-kube-api-access-jxlks\") pod \"route-controller-manager-684f8df48d-g2xqm\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.453334 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.461162 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.590872 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="665ba061-eec9-43db-83da-694c1e1e2cad" path="/var/lib/kubelet/pods/665ba061-eec9-43db-83da-694c1e1e2cad/volumes"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.592816 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f189d1a5-8e93-4d4d-b11d-29c60e3c3106" path="/var/lib/kubelet/pods/f189d1a5-8e93-4d4d-b11d-29c60e3c3106/volumes"
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.688309 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"]
Jan 21 21:14:16 crc kubenswrapper[4860]: W0121 21:14:16.698308 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3077cd96_27db_40f8_8127_e443ba72fd79.slice/crio-f96949970b4853cc28fa399e83faaafc1698778a2f6d4bc603c4fbe9bb7df9a9 WatchSource:0}: Error finding container f96949970b4853cc28fa399e83faaafc1698778a2f6d4bc603c4fbe9bb7df9a9: Status 404 returned error can't find the container with id f96949970b4853cc28fa399e83faaafc1698778a2f6d4bc603c4fbe9bb7df9a9
Jan 21 21:14:16 crc kubenswrapper[4860]: I0121 21:14:16.734840 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"]
Jan 21 21:14:16 crc kubenswrapper[4860]: W0121 21:14:16.738027 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369d1f81_d57b_4b2a_a0f9_ddfb9581dbfb.slice/crio-f235abc9974f6bd2d499a9ee5690933fc38ad959a3698ecf6377e1a915ce29cd WatchSource:0}: Error finding container f235abc9974f6bd2d499a9ee5690933fc38ad959a3698ecf6377e1a915ce29cd: Status 404 returned error can't find the container with id f235abc9974f6bd2d499a9ee5690933fc38ad959a3698ecf6377e1a915ce29cd
Jan 21 21:14:17 crc kubenswrapper[4860]: I0121 21:14:17.054569 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm" event={"ID":"3077cd96-27db-40f8-8127-e443ba72fd79","Type":"ContainerStarted","Data":"0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2"}
Jan 21 21:14:17 crc kubenswrapper[4860]: I0121 21:14:17.055038 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:17 crc kubenswrapper[4860]: I0121 21:14:17.055059 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm" event={"ID":"3077cd96-27db-40f8-8127-e443ba72fd79","Type":"ContainerStarted","Data":"f96949970b4853cc28fa399e83faaafc1698778a2f6d4bc603c4fbe9bb7df9a9"}
Jan 21 21:14:17 crc kubenswrapper[4860]: I0121 21:14:17.060829 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb" event={"ID":"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb","Type":"ContainerStarted","Data":"9702247cd452c0d7c73b576140f73d08600c4139d3eae10e4928877250c5b34b"}
Jan 21 21:14:17 crc kubenswrapper[4860]: I0121 21:14:17.060910 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb" event={"ID":"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb","Type":"ContainerStarted","Data":"f235abc9974f6bd2d499a9ee5690933fc38ad959a3698ecf6377e1a915ce29cd"}
Jan 21 21:14:17 crc kubenswrapper[4860]: I0121 21:14:17.081732 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm" podStartSLOduration=3.081707599 podStartE2EDuration="3.081707599s" podCreationTimestamp="2026-01-21 21:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:14:17.077747625 +0000 UTC m=+349.299926105" watchObservedRunningTime="2026-01-21 21:14:17.081707599 +0000 UTC m=+349.303886079"
Jan 21 21:14:17 crc kubenswrapper[4860]: I0121 21:14:17.482789 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:17 crc kubenswrapper[4860]: I0121 21:14:17.510237 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb" podStartSLOduration=3.510207763 podStartE2EDuration="3.510207763s" podCreationTimestamp="2026-01-21 21:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:14:17.098503515 +0000 UTC m=+349.320681985" watchObservedRunningTime="2026-01-21 21:14:17.510207763 +0000 UTC m=+349.732386223"
Jan 21 21:14:18 crc kubenswrapper[4860]: I0121 21:14:18.068462 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:18 crc kubenswrapper[4860]: I0121 21:14:18.073042 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:32 crc kubenswrapper[4860]: I0121 21:14:32.103471 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:14:32 crc kubenswrapper[4860]: I0121 21:14:32.104126 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:14:34 crc kubenswrapper[4860]: I0121 21:14:34.611314 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"]
Jan 21 21:14:34 crc kubenswrapper[4860]: I0121 21:14:34.611957 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb" podUID="369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb" containerName="controller-manager" containerID="cri-o://9702247cd452c0d7c73b576140f73d08600c4139d3eae10e4928877250c5b34b" gracePeriod=30
Jan 21 21:14:34 crc kubenswrapper[4860]: I0121 21:14:34.636532 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"]
Jan 21 21:14:34 crc kubenswrapper[4860]: I0121 21:14:34.636770 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm" podUID="3077cd96-27db-40f8-8127-e443ba72fd79" containerName="route-controller-manager" containerID="cri-o://0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2" gracePeriod=30
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.160253 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.176871 4860 generic.go:334] "Generic (PLEG): container finished" podID="3077cd96-27db-40f8-8127-e443ba72fd79" containerID="0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2" exitCode=0
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.177046 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.178900 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm" event={"ID":"3077cd96-27db-40f8-8127-e443ba72fd79","Type":"ContainerDied","Data":"0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2"}
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.178983 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm" event={"ID":"3077cd96-27db-40f8-8127-e443ba72fd79","Type":"ContainerDied","Data":"f96949970b4853cc28fa399e83faaafc1698778a2f6d4bc603c4fbe9bb7df9a9"}
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.179066 4860 scope.go:117] "RemoveContainer" containerID="0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2"
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.198515 4860 generic.go:334] "Generic (PLEG): container finished" podID="369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb" containerID="9702247cd452c0d7c73b576140f73d08600c4139d3eae10e4928877250c5b34b" exitCode=0
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.198583 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb" event={"ID":"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb","Type":"ContainerDied","Data":"9702247cd452c0d7c73b576140f73d08600c4139d3eae10e4928877250c5b34b"}
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.229060 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.235306 4860 scope.go:117] "RemoveContainer" containerID="0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2"
Jan 21 21:14:35 crc kubenswrapper[4860]: E0121 21:14:35.236174 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2\": container with ID starting with 0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2 not found: ID does not exist" containerID="0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2"
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.236271 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2"} err="failed to get container status \"0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2\": rpc error: code = NotFound desc = could not find container \"0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2\": container with ID starting with 0f6c0759c59988ffcf605a0b757396af5d14c542635b76e749a2743b828397b2 not found: ID does not exist"
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.255921 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-client-ca\") pod \"3077cd96-27db-40f8-8127-e443ba72fd79\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") "
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.256119 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3077cd96-27db-40f8-8127-e443ba72fd79-serving-cert\") pod \"3077cd96-27db-40f8-8127-e443ba72fd79\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") "
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.256164 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxlks\" (UniqueName: \"kubernetes.io/projected/3077cd96-27db-40f8-8127-e443ba72fd79-kube-api-access-jxlks\") pod \"3077cd96-27db-40f8-8127-e443ba72fd79\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") "
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.256221 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-config\") pod \"3077cd96-27db-40f8-8127-e443ba72fd79\" (UID: \"3077cd96-27db-40f8-8127-e443ba72fd79\") "
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.256738 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-client-ca" (OuterVolumeSpecName: "client-ca") pod "3077cd96-27db-40f8-8127-e443ba72fd79" (UID: "3077cd96-27db-40f8-8127-e443ba72fd79"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.257094 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-config" (OuterVolumeSpecName: "config") pod "3077cd96-27db-40f8-8127-e443ba72fd79" (UID: "3077cd96-27db-40f8-8127-e443ba72fd79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.257277 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-config\") on node \"crc\" DevicePath \"\""
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.257298 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3077cd96-27db-40f8-8127-e443ba72fd79-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.262843 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3077cd96-27db-40f8-8127-e443ba72fd79-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3077cd96-27db-40f8-8127-e443ba72fd79" (UID: "3077cd96-27db-40f8-8127-e443ba72fd79"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.262895 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3077cd96-27db-40f8-8127-e443ba72fd79-kube-api-access-jxlks" (OuterVolumeSpecName: "kube-api-access-jxlks") pod "3077cd96-27db-40f8-8127-e443ba72fd79" (UID: "3077cd96-27db-40f8-8127-e443ba72fd79"). InnerVolumeSpecName "kube-api-access-jxlks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.357855 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52kz5\" (UniqueName: \"kubernetes.io/projected/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-kube-api-access-52kz5\") pod \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") "
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.357971 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-serving-cert\") pod \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") "
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.358002 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-client-ca\") pod \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") "
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.358049 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-proxy-ca-bundles\") pod \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") "
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.358137 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-config\") pod \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\" (UID: \"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb\") "
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.358454 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3077cd96-27db-40f8-8127-e443ba72fd79-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.358474 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxlks\" (UniqueName: \"kubernetes.io/projected/3077cd96-27db-40f8-8127-e443ba72fd79-kube-api-access-jxlks\") on node \"crc\" DevicePath \"\""
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.358842 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-client-ca" (OuterVolumeSpecName: "client-ca") pod "369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb" (UID: "369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.358847 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb" (UID: "369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.359046 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-config" (OuterVolumeSpecName: "config") pod "369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb" (UID: "369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.361333 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb" (UID: "369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.361498 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-kube-api-access-52kz5" (OuterVolumeSpecName: "kube-api-access-52kz5") pod "369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb" (UID: "369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb"). InnerVolumeSpecName "kube-api-access-52kz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.460145 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.460204 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.460218 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52kz5\" (UniqueName: \"kubernetes.io/projected/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-kube-api-access-52kz5\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.460229 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.460237 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.504694 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"] Jan 21 21:14:35 crc kubenswrapper[4860]: I0121 21:14:35.508999 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-684f8df48d-g2xqm"] Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.126777 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5"] Jan 21 21:14:36 crc kubenswrapper[4860]: E0121 21:14:36.127549 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3077cd96-27db-40f8-8127-e443ba72fd79" containerName="route-controller-manager" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.127594 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3077cd96-27db-40f8-8127-e443ba72fd79" containerName="route-controller-manager" Jan 21 21:14:36 crc kubenswrapper[4860]: E0121 21:14:36.127654 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb" containerName="controller-manager" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.127663 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb" containerName="controller-manager" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.127907 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3077cd96-27db-40f8-8127-e443ba72fd79" containerName="route-controller-manager" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.127994 4860 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb" containerName="controller-manager" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.129214 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.130829 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-ldl2c"] Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.132006 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.132459 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.132611 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.132568 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.132885 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.133372 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.134905 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.144128 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-5b85888b7c-ldl2c"] Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.150195 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5"] Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.211552 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb" event={"ID":"369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb","Type":"ContainerDied","Data":"f235abc9974f6bd2d499a9ee5690933fc38ad959a3698ecf6377e1a915ce29cd"} Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.211651 4860 scope.go:117] "RemoveContainer" containerID="9702247cd452c0d7c73b576140f73d08600c4139d3eae10e4928877250c5b34b" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.212071 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.251958 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"] Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.255198 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cd856c7d8-9zmbb"] Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.275953 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-config\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.276069 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cv5xb\" (UniqueName: \"kubernetes.io/projected/3d23fe5e-4d30-4bde-a279-01290e2c7f44-kube-api-access-cv5xb\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.276135 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-client-ca\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.276162 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291fcc44-1605-4cbe-89fb-907a287cb453-serving-cert\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.276223 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d23fe5e-4d30-4bde-a279-01290e2c7f44-serving-cert\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.276272 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-client-ca\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " 
pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.276321 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.276367 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjhcx\" (UniqueName: \"kubernetes.io/projected/291fcc44-1605-4cbe-89fb-907a287cb453-kube-api-access-mjhcx\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.276538 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-config\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.377923 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-config\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.378046 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-config\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.378094 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv5xb\" (UniqueName: \"kubernetes.io/projected/3d23fe5e-4d30-4bde-a279-01290e2c7f44-kube-api-access-cv5xb\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.378133 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-client-ca\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.378160 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291fcc44-1605-4cbe-89fb-907a287cb453-serving-cert\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.378196 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d23fe5e-4d30-4bde-a279-01290e2c7f44-serving-cert\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc 
kubenswrapper[4860]: I0121 21:14:36.378236 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-client-ca\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.378266 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.378299 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjhcx\" (UniqueName: \"kubernetes.io/projected/291fcc44-1605-4cbe-89fb-907a287cb453-kube-api-access-mjhcx\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.379717 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-client-ca\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.379857 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-client-ca\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: 
\"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.380001 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-config\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.380201 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-config\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.380217 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.383586 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291fcc44-1605-4cbe-89fb-907a287cb453-serving-cert\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.383608 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3d23fe5e-4d30-4bde-a279-01290e2c7f44-serving-cert\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.406692 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjhcx\" (UniqueName: \"kubernetes.io/projected/291fcc44-1605-4cbe-89fb-907a287cb453-kube-api-access-mjhcx\") pod \"route-controller-manager-7484d9ddcc-zz5v5\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.411755 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv5xb\" (UniqueName: \"kubernetes.io/projected/3d23fe5e-4d30-4bde-a279-01290e2c7f44-kube-api-access-cv5xb\") pod \"controller-manager-5b85888b7c-ldl2c\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.455376 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.469681 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.606998 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3077cd96-27db-40f8-8127-e443ba72fd79" path="/var/lib/kubelet/pods/3077cd96-27db-40f8-8127-e443ba72fd79/volumes" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.608150 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb" path="/var/lib/kubelet/pods/369d1f81-d57b-4b2a-a0f9-ddfb9581dbfb/volumes" Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.907852 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5"] Jan 21 21:14:36 crc kubenswrapper[4860]: I0121 21:14:36.946965 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-ldl2c"] Jan 21 21:14:37 crc kubenswrapper[4860]: I0121 21:14:37.219629 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" event={"ID":"3d23fe5e-4d30-4bde-a279-01290e2c7f44","Type":"ContainerStarted","Data":"3f8e11c52353d6476138b9062ae5a2100ce6e37a902cac7d89f659f0c6f89128"} Jan 21 21:14:37 crc kubenswrapper[4860]: I0121 21:14:37.220118 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" event={"ID":"3d23fe5e-4d30-4bde-a279-01290e2c7f44","Type":"ContainerStarted","Data":"e8c5ead463b8d1a7245799d31919dd45414d32dde7c2773e2749cd80cdd6ff48"} Jan 21 21:14:37 crc kubenswrapper[4860]: I0121 21:14:37.220146 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:37 crc kubenswrapper[4860]: I0121 21:14:37.222084 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" event={"ID":"291fcc44-1605-4cbe-89fb-907a287cb453","Type":"ContainerStarted","Data":"96bfd2a4e4ec84233e34a25c9dfce0cc89a21f4c08880ca39b80fb05d8db082a"} Jan 21 21:14:37 crc kubenswrapper[4860]: I0121 21:14:37.222115 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" event={"ID":"291fcc44-1605-4cbe-89fb-907a287cb453","Type":"ContainerStarted","Data":"c3b5ba22ada2662358f1f0986ad169d37ff7cafbb71ed85135a8548292b23219"} Jan 21 21:14:37 crc kubenswrapper[4860]: I0121 21:14:37.222284 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:37 crc kubenswrapper[4860]: I0121 21:14:37.225946 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:37 crc kubenswrapper[4860]: I0121 21:14:37.244453 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" podStartSLOduration=3.244413664 podStartE2EDuration="3.244413664s" podCreationTimestamp="2026-01-21 21:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:14:37.238500433 +0000 UTC m=+369.460678903" watchObservedRunningTime="2026-01-21 21:14:37.244413664 +0000 UTC m=+369.466592134" Jan 21 21:14:37 crc kubenswrapper[4860]: I0121 21:14:37.294269 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" podStartSLOduration=3.294252791 podStartE2EDuration="3.294252791s" podCreationTimestamp="2026-01-21 21:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:14:37.292807334 +0000 UTC m=+369.514985804" watchObservedRunningTime="2026-01-21 21:14:37.294252791 +0000 UTC m=+369.516431261" Jan 21 21:14:37 crc kubenswrapper[4860]: I0121 21:14:37.451895 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:14:54 crc kubenswrapper[4860]: I0121 21:14:54.635703 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-ldl2c"] Jan 21 21:14:54 crc kubenswrapper[4860]: I0121 21:14:54.636600 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" podUID="3d23fe5e-4d30-4bde-a279-01290e2c7f44" containerName="controller-manager" containerID="cri-o://3f8e11c52353d6476138b9062ae5a2100ce6e37a902cac7d89f659f0c6f89128" gracePeriod=30 Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.369393 4860 generic.go:334] "Generic (PLEG): container finished" podID="3d23fe5e-4d30-4bde-a279-01290e2c7f44" containerID="3f8e11c52353d6476138b9062ae5a2100ce6e37a902cac7d89f659f0c6f89128" exitCode=0 Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.369486 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" event={"ID":"3d23fe5e-4d30-4bde-a279-01290e2c7f44","Type":"ContainerDied","Data":"3f8e11c52353d6476138b9062ae5a2100ce6e37a902cac7d89f659f0c6f89128"} Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.563364 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l87hr"] Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.563678 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l87hr" 
podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" containerName="registry-server" containerID="cri-o://142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c" gracePeriod=2 Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.753640 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9dqdq"] Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.754048 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9dqdq" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerName="registry-server" containerID="cri-o://8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868" gracePeriod=2 Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.906448 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.926734 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv5xb\" (UniqueName: \"kubernetes.io/projected/3d23fe5e-4d30-4bde-a279-01290e2c7f44-kube-api-access-cv5xb\") pod \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.926807 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-client-ca\") pod \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.926860 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-config\") pod \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " Jan 21 
21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.926885 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-proxy-ca-bundles\") pod \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.926947 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d23fe5e-4d30-4bde-a279-01290e2c7f44-serving-cert\") pod \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\" (UID: \"3d23fe5e-4d30-4bde-a279-01290e2c7f44\") " Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.929003 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3d23fe5e-4d30-4bde-a279-01290e2c7f44" (UID: "3d23fe5e-4d30-4bde-a279-01290e2c7f44"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.929057 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d23fe5e-4d30-4bde-a279-01290e2c7f44" (UID: "3d23fe5e-4d30-4bde-a279-01290e2c7f44"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.929599 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-config" (OuterVolumeSpecName: "config") pod "3d23fe5e-4d30-4bde-a279-01290e2c7f44" (UID: "3d23fe5e-4d30-4bde-a279-01290e2c7f44"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.936249 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d23fe5e-4d30-4bde-a279-01290e2c7f44-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d23fe5e-4d30-4bde-a279-01290e2c7f44" (UID: "3d23fe5e-4d30-4bde-a279-01290e2c7f44"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.939133 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d23fe5e-4d30-4bde-a279-01290e2c7f44-kube-api-access-cv5xb" (OuterVolumeSpecName: "kube-api-access-cv5xb") pod "3d23fe5e-4d30-4bde-a279-01290e2c7f44" (UID: "3d23fe5e-4d30-4bde-a279-01290e2c7f44"). InnerVolumeSpecName "kube-api-access-cv5xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.947041 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d"] Jan 21 21:14:55 crc kubenswrapper[4860]: E0121 21:14:55.947732 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d23fe5e-4d30-4bde-a279-01290e2c7f44" containerName="controller-manager" Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.947757 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d23fe5e-4d30-4bde-a279-01290e2c7f44" containerName="controller-manager" Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.947891 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d23fe5e-4d30-4bde-a279-01290e2c7f44" containerName="controller-manager" Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.949919 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:55 crc kubenswrapper[4860]: I0121 21:14:55.950879 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d"] Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.028313 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv5xb\" (UniqueName: \"kubernetes.io/projected/3d23fe5e-4d30-4bde-a279-01290e2c7f44-kube-api-access-cv5xb\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.028370 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.028397 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.028425 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d23fe5e-4d30-4bde-a279-01290e2c7f44-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.028460 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d23fe5e-4d30-4bde-a279-01290e2c7f44-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.040411 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l87hr" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.130516 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04d1f4e0-d2a8-404a-8c25-93a8f2661841-serving-cert\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.130581 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04d1f4e0-d2a8-404a-8c25-93a8f2661841-client-ca\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.130974 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq5w2\" (UniqueName: \"kubernetes.io/projected/04d1f4e0-d2a8-404a-8c25-93a8f2661841-kube-api-access-tq5w2\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.131129 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d1f4e0-d2a8-404a-8c25-93a8f2661841-config\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.131218 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04d1f4e0-d2a8-404a-8c25-93a8f2661841-proxy-ca-bundles\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.190574 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9dqdq" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.232434 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-catalog-content\") pod \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.232598 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-utilities\") pod \"c599eaed-fddf-4591-a474-f8c85a5470ae\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.232672 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-utilities\") pod \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.232716 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f45rh\" (UniqueName: \"kubernetes.io/projected/c599eaed-fddf-4591-a474-f8c85a5470ae-kube-api-access-f45rh\") pod \"c599eaed-fddf-4591-a474-f8c85a5470ae\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.232767 4860 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-catalog-content\") pod \"c599eaed-fddf-4591-a474-f8c85a5470ae\" (UID: \"c599eaed-fddf-4591-a474-f8c85a5470ae\") " Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.232807 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8j26\" (UniqueName: \"kubernetes.io/projected/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-kube-api-access-v8j26\") pod \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\" (UID: \"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48\") " Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.233022 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04d1f4e0-d2a8-404a-8c25-93a8f2661841-serving-cert\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.233060 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04d1f4e0-d2a8-404a-8c25-93a8f2661841-client-ca\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.233099 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq5w2\" (UniqueName: \"kubernetes.io/projected/04d1f4e0-d2a8-404a-8c25-93a8f2661841-kube-api-access-tq5w2\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.233137 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d1f4e0-d2a8-404a-8c25-93a8f2661841-config\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.233199 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04d1f4e0-d2a8-404a-8c25-93a8f2661841-proxy-ca-bundles\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.234499 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-utilities" (OuterVolumeSpecName: "utilities") pod "f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" (UID: "f1a9e789-f7d5-4640-8ecf-4eef9aa31a48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.234998 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-utilities" (OuterVolumeSpecName: "utilities") pod "c599eaed-fddf-4591-a474-f8c85a5470ae" (UID: "c599eaed-fddf-4591-a474-f8c85a5470ae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.235418 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04d1f4e0-d2a8-404a-8c25-93a8f2661841-client-ca\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.237915 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c599eaed-fddf-4591-a474-f8c85a5470ae-kube-api-access-f45rh" (OuterVolumeSpecName: "kube-api-access-f45rh") pod "c599eaed-fddf-4591-a474-f8c85a5470ae" (UID: "c599eaed-fddf-4591-a474-f8c85a5470ae"). InnerVolumeSpecName "kube-api-access-f45rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.238313 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d1f4e0-d2a8-404a-8c25-93a8f2661841-config\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.239036 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04d1f4e0-d2a8-404a-8c25-93a8f2661841-proxy-ca-bundles\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.243437 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04d1f4e0-d2a8-404a-8c25-93a8f2661841-serving-cert\") pod 
\"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.243833 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-kube-api-access-v8j26" (OuterVolumeSpecName: "kube-api-access-v8j26") pod "f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" (UID: "f1a9e789-f7d5-4640-8ecf-4eef9aa31a48"). InnerVolumeSpecName "kube-api-access-v8j26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.254841 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq5w2\" (UniqueName: \"kubernetes.io/projected/04d1f4e0-d2a8-404a-8c25-93a8f2661841-kube-api-access-tq5w2\") pod \"controller-manager-7cd856c7d8-xjg6d\" (UID: \"04d1f4e0-d2a8-404a-8c25-93a8f2661841\") " pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.281949 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c599eaed-fddf-4591-a474-f8c85a5470ae" (UID: "c599eaed-fddf-4591-a474-f8c85a5470ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.289394 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" (UID: "f1a9e789-f7d5-4640-8ecf-4eef9aa31a48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.332623 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.333730 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.333764 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.333774 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.333785 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f45rh\" (UniqueName: \"kubernetes.io/projected/c599eaed-fddf-4591-a474-f8c85a5470ae-kube-api-access-f45rh\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.333794 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c599eaed-fddf-4591-a474-f8c85a5470ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.333804 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8j26\" (UniqueName: \"kubernetes.io/projected/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48-kube-api-access-v8j26\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.380814 4860 generic.go:334] "Generic (PLEG): container 
finished" podID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerID="8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868" exitCode=0 Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.380944 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqdq" event={"ID":"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48","Type":"ContainerDied","Data":"8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868"} Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.380998 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqdq" event={"ID":"f1a9e789-f7d5-4640-8ecf-4eef9aa31a48","Type":"ContainerDied","Data":"9447a8b5eba07ae23ab47e97151bf151a93222d9fc8eb714949dd8ef31b29368"} Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.381056 4860 scope.go:117] "RemoveContainer" containerID="8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.381271 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9dqdq" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.392070 4860 generic.go:334] "Generic (PLEG): container finished" podID="c599eaed-fddf-4591-a474-f8c85a5470ae" containerID="142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c" exitCode=0 Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.392147 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l87hr" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.392153 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l87hr" event={"ID":"c599eaed-fddf-4591-a474-f8c85a5470ae","Type":"ContainerDied","Data":"142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c"} Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.392266 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l87hr" event={"ID":"c599eaed-fddf-4591-a474-f8c85a5470ae","Type":"ContainerDied","Data":"33668f061e3a7d7f3520dbefb7f2fd8eb7df281d6440d1e898b9492880754a87"} Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.394680 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" event={"ID":"3d23fe5e-4d30-4bde-a279-01290e2c7f44","Type":"ContainerDied","Data":"e8c5ead463b8d1a7245799d31919dd45414d32dde7c2773e2749cd80cdd6ff48"} Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.394778 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-ldl2c" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.435783 4860 scope.go:117] "RemoveContainer" containerID="891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.441377 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9dqdq"] Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.461553 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9dqdq"] Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.480070 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l87hr"] Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.487242 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l87hr"] Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.487335 4860 scope.go:117] "RemoveContainer" containerID="65b45a23e03d63d4c192c378da99142f29998f25b6ebf463c9ca378f4195bae8" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.490407 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-ldl2c"] Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.493296 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-ldl2c"] Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.508446 4860 scope.go:117] "RemoveContainer" containerID="8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868" Jan 21 21:14:56 crc kubenswrapper[4860]: E0121 21:14:56.509403 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868\": container with ID starting with 
8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868 not found: ID does not exist" containerID="8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.509467 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868"} err="failed to get container status \"8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868\": rpc error: code = NotFound desc = could not find container \"8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868\": container with ID starting with 8848e807f4150c70013bab0177c7e234bd03faf7cb776779e9da0f3521bb1868 not found: ID does not exist" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.509507 4860 scope.go:117] "RemoveContainer" containerID="891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a" Jan 21 21:14:56 crc kubenswrapper[4860]: E0121 21:14:56.509956 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a\": container with ID starting with 891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a not found: ID does not exist" containerID="891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.509981 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a"} err="failed to get container status \"891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a\": rpc error: code = NotFound desc = could not find container \"891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a\": container with ID starting with 891b2f1e0f32392e53bab1feda36b6b169c97f4f72ce169df3e832135acba54a not found: ID does not 
exist" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.509995 4860 scope.go:117] "RemoveContainer" containerID="65b45a23e03d63d4c192c378da99142f29998f25b6ebf463c9ca378f4195bae8" Jan 21 21:14:56 crc kubenswrapper[4860]: E0121 21:14:56.519725 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65b45a23e03d63d4c192c378da99142f29998f25b6ebf463c9ca378f4195bae8\": container with ID starting with 65b45a23e03d63d4c192c378da99142f29998f25b6ebf463c9ca378f4195bae8 not found: ID does not exist" containerID="65b45a23e03d63d4c192c378da99142f29998f25b6ebf463c9ca378f4195bae8" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.519804 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65b45a23e03d63d4c192c378da99142f29998f25b6ebf463c9ca378f4195bae8"} err="failed to get container status \"65b45a23e03d63d4c192c378da99142f29998f25b6ebf463c9ca378f4195bae8\": rpc error: code = NotFound desc = could not find container \"65b45a23e03d63d4c192c378da99142f29998f25b6ebf463c9ca378f4195bae8\": container with ID starting with 65b45a23e03d63d4c192c378da99142f29998f25b6ebf463c9ca378f4195bae8 not found: ID does not exist" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.519851 4860 scope.go:117] "RemoveContainer" containerID="142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.542337 4860 scope.go:117] "RemoveContainer" containerID="144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.562208 4860 scope.go:117] "RemoveContainer" containerID="78b8d6f969ebeae0edd3eecfface32ae9306968128973035c5099bee50ac6aa7" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.578377 4860 scope.go:117] "RemoveContainer" containerID="142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c" Jan 21 21:14:56 crc 
kubenswrapper[4860]: E0121 21:14:56.578796 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c\": container with ID starting with 142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c not found: ID does not exist" containerID="142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.578846 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c"} err="failed to get container status \"142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c\": rpc error: code = NotFound desc = could not find container \"142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c\": container with ID starting with 142aebd23bdac10ded93eddce89ae5e693a24e3f5321b140193593b3b35c3c1c not found: ID does not exist" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.578879 4860 scope.go:117] "RemoveContainer" containerID="144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66" Jan 21 21:14:56 crc kubenswrapper[4860]: E0121 21:14:56.579325 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66\": container with ID starting with 144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66 not found: ID does not exist" containerID="144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.579378 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66"} err="failed to get container status 
\"144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66\": rpc error: code = NotFound desc = could not find container \"144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66\": container with ID starting with 144efd5945472dc66645c4c444a6cd762ca0e6a0ccee3762da2b4dbd8f766e66 not found: ID does not exist" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.579414 4860 scope.go:117] "RemoveContainer" containerID="78b8d6f969ebeae0edd3eecfface32ae9306968128973035c5099bee50ac6aa7" Jan 21 21:14:56 crc kubenswrapper[4860]: E0121 21:14:56.579946 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b8d6f969ebeae0edd3eecfface32ae9306968128973035c5099bee50ac6aa7\": container with ID starting with 78b8d6f969ebeae0edd3eecfface32ae9306968128973035c5099bee50ac6aa7 not found: ID does not exist" containerID="78b8d6f969ebeae0edd3eecfface32ae9306968128973035c5099bee50ac6aa7" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.579986 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b8d6f969ebeae0edd3eecfface32ae9306968128973035c5099bee50ac6aa7"} err="failed to get container status \"78b8d6f969ebeae0edd3eecfface32ae9306968128973035c5099bee50ac6aa7\": rpc error: code = NotFound desc = could not find container \"78b8d6f969ebeae0edd3eecfface32ae9306968128973035c5099bee50ac6aa7\": container with ID starting with 78b8d6f969ebeae0edd3eecfface32ae9306968128973035c5099bee50ac6aa7 not found: ID does not exist" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.580007 4860 scope.go:117] "RemoveContainer" containerID="3f8e11c52353d6476138b9062ae5a2100ce6e37a902cac7d89f659f0c6f89128" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.587659 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d23fe5e-4d30-4bde-a279-01290e2c7f44" path="/var/lib/kubelet/pods/3d23fe5e-4d30-4bde-a279-01290e2c7f44/volumes" Jan 21 
21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.588579 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" path="/var/lib/kubelet/pods/c599eaed-fddf-4591-a474-f8c85a5470ae/volumes" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.589499 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" path="/var/lib/kubelet/pods/f1a9e789-f7d5-4640-8ecf-4eef9aa31a48/volumes" Jan 21 21:14:56 crc kubenswrapper[4860]: I0121 21:14:56.795305 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d"] Jan 21 21:14:56 crc kubenswrapper[4860]: W0121 21:14:56.799804 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04d1f4e0_d2a8_404a_8c25_93a8f2661841.slice/crio-8f6ac7fc80dce036b6dc358a7b1c1d8867d389de04f9f6c08f836da1f7c167bd WatchSource:0}: Error finding container 8f6ac7fc80dce036b6dc358a7b1c1d8867d389de04f9f6c08f836da1f7c167bd: Status 404 returned error can't find the container with id 8f6ac7fc80dce036b6dc358a7b1c1d8867d389de04f9f6c08f836da1f7c167bd Jan 21 21:14:57 crc kubenswrapper[4860]: I0121 21:14:57.404369 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" event={"ID":"04d1f4e0-d2a8-404a-8c25-93a8f2661841","Type":"ContainerStarted","Data":"715c6bb8ca17aa07f23e00aca0337aedced5c74c0f3818241190298faed88093"} Jan 21 21:14:57 crc kubenswrapper[4860]: I0121 21:14:57.404417 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" event={"ID":"04d1f4e0-d2a8-404a-8c25-93a8f2661841","Type":"ContainerStarted","Data":"8f6ac7fc80dce036b6dc358a7b1c1d8867d389de04f9f6c08f836da1f7c167bd"} Jan 21 21:14:57 crc kubenswrapper[4860]: I0121 21:14:57.405775 4860 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:57 crc kubenswrapper[4860]: I0121 21:14:57.410292 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" Jan 21 21:14:57 crc kubenswrapper[4860]: I0121 21:14:57.426269 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cd856c7d8-xjg6d" podStartSLOduration=3.426253659 podStartE2EDuration="3.426253659s" podCreationTimestamp="2026-01-21 21:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:14:57.425644 +0000 UTC m=+389.647822470" watchObservedRunningTime="2026-01-21 21:14:57.426253659 +0000 UTC m=+389.648432129" Jan 21 21:14:57 crc kubenswrapper[4860]: I0121 21:14:57.956641 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh97n"] Jan 21 21:14:57 crc kubenswrapper[4860]: I0121 21:14:57.957612 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zh97n" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerName="registry-server" containerID="cri-o://7fa200e9fdb67b419359ca9a7acea43911dabcd0955dc4edcf79d45a70177866" gracePeriod=2 Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.150515 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9rgh9"] Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.150815 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9rgh9" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" containerName="registry-server" containerID="cri-o://eb18c30d9d4e28b2996d4dbd0c3bc5c047237a26f1e8fb1dfda892239d53c904" gracePeriod=2 Jan 21 
21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.422753 4860 generic.go:334] "Generic (PLEG): container finished" podID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerID="7fa200e9fdb67b419359ca9a7acea43911dabcd0955dc4edcf79d45a70177866" exitCode=0 Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.422864 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh97n" event={"ID":"6d731289-0564-4ea3-a2ea-c19c361c0d3e","Type":"ContainerDied","Data":"7fa200e9fdb67b419359ca9a7acea43911dabcd0955dc4edcf79d45a70177866"} Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.424887 4860 generic.go:334] "Generic (PLEG): container finished" podID="41129b4d-292c-46eb-807b-ed0c56b43c9b" containerID="eb18c30d9d4e28b2996d4dbd0c3bc5c047237a26f1e8fb1dfda892239d53c904" exitCode=0 Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.424996 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rgh9" event={"ID":"41129b4d-292c-46eb-807b-ed0c56b43c9b","Type":"ContainerDied","Data":"eb18c30d9d4e28b2996d4dbd0c3bc5c047237a26f1e8fb1dfda892239d53c904"} Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.582020 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.771757 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-catalog-content\") pod \"41129b4d-292c-46eb-807b-ed0c56b43c9b\" (UID: \"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.771845 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckxnr\" (UniqueName: \"kubernetes.io/projected/41129b4d-292c-46eb-807b-ed0c56b43c9b-kube-api-access-ckxnr\") pod \"41129b4d-292c-46eb-807b-ed0c56b43c9b\" (UID: \"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.771885 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-utilities\") pod \"41129b4d-292c-46eb-807b-ed0c56b43c9b\" (UID: \"41129b4d-292c-46eb-807b-ed0c56b43c9b\") " Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.773054 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-utilities" (OuterVolumeSpecName: "utilities") pod "41129b4d-292c-46eb-807b-ed0c56b43c9b" (UID: "41129b4d-292c-46eb-807b-ed0c56b43c9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.781534 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41129b4d-292c-46eb-807b-ed0c56b43c9b-kube-api-access-ckxnr" (OuterVolumeSpecName: "kube-api-access-ckxnr") pod "41129b4d-292c-46eb-807b-ed0c56b43c9b" (UID: "41129b4d-292c-46eb-807b-ed0c56b43c9b"). InnerVolumeSpecName "kube-api-access-ckxnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.842396 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.874231 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckxnr\" (UniqueName: \"kubernetes.io/projected/41129b4d-292c-46eb-807b-ed0c56b43c9b-kube-api-access-ckxnr\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.874273 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.897274 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41129b4d-292c-46eb-807b-ed0c56b43c9b" (UID: "41129b4d-292c-46eb-807b-ed0c56b43c9b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.975553 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bvkv\" (UniqueName: \"kubernetes.io/projected/6d731289-0564-4ea3-a2ea-c19c361c0d3e-kube-api-access-2bvkv\") pod \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.975822 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-catalog-content\") pod \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.975878 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-utilities\") pod \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\" (UID: \"6d731289-0564-4ea3-a2ea-c19c361c0d3e\") " Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.976248 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41129b4d-292c-46eb-807b-ed0c56b43c9b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.977252 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-utilities" (OuterVolumeSpecName: "utilities") pod "6d731289-0564-4ea3-a2ea-c19c361c0d3e" (UID: "6d731289-0564-4ea3-a2ea-c19c361c0d3e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.980153 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d731289-0564-4ea3-a2ea-c19c361c0d3e-kube-api-access-2bvkv" (OuterVolumeSpecName: "kube-api-access-2bvkv") pod "6d731289-0564-4ea3-a2ea-c19c361c0d3e" (UID: "6d731289-0564-4ea3-a2ea-c19c361c0d3e"). InnerVolumeSpecName "kube-api-access-2bvkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:14:58 crc kubenswrapper[4860]: I0121 21:14:58.995760 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d731289-0564-4ea3-a2ea-c19c361c0d3e" (UID: "6d731289-0564-4ea3-a2ea-c19c361c0d3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.078964 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.079032 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d731289-0564-4ea3-a2ea-c19c361c0d3e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.079054 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bvkv\" (UniqueName: \"kubernetes.io/projected/6d731289-0564-4ea3-a2ea-c19c361c0d3e-kube-api-access-2bvkv\") on node \"crc\" DevicePath \"\"" Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.434314 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh97n" 
event={"ID":"6d731289-0564-4ea3-a2ea-c19c361c0d3e","Type":"ContainerDied","Data":"19efae694f68181d86ce3d89348f13b1deada5710de0d20b482a4911c2fcf109"} Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.434384 4860 scope.go:117] "RemoveContainer" containerID="7fa200e9fdb67b419359ca9a7acea43911dabcd0955dc4edcf79d45a70177866" Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.434525 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zh97n" Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.440294 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9rgh9" Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.441024 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rgh9" event={"ID":"41129b4d-292c-46eb-807b-ed0c56b43c9b","Type":"ContainerDied","Data":"97509cdd3c399d835da39d67052dd0926d985570657bd9b848c12417a142cc02"} Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.455200 4860 scope.go:117] "RemoveContainer" containerID="c496bfdd97fbe3b2368d98283d7ccf6fe05c0ef4cf0ff75ed49f4cca3dd1db0d" Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.474723 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh97n"] Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.498288 4860 scope.go:117] "RemoveContainer" containerID="feb6b85fed7542d666ccf71e8fc214698d13f740630bb0fd3b9d5ae3e0b63bb9" Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.511376 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh97n"] Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.516687 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9rgh9"] Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.522622 4860 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9rgh9"] Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.531963 4860 scope.go:117] "RemoveContainer" containerID="eb18c30d9d4e28b2996d4dbd0c3bc5c047237a26f1e8fb1dfda892239d53c904" Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.553721 4860 scope.go:117] "RemoveContainer" containerID="d5f015bafb58829f24dcf1f2a4bba53e99d5d391c44f4c0768c5f75809553329" Jan 21 21:14:59 crc kubenswrapper[4860]: I0121 21:14:59.577818 4860 scope.go:117] "RemoveContainer" containerID="7cfdeb424752ccd6efc6590ef947538480ba1681acfa81169d28673a38bbc24f" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.205858 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs"] Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206108 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206121 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206133 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" containerName="extract-utilities" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206139 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" containerName="extract-utilities" Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206150 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" containerName="extract-content" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206157 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" 
containerName="extract-content" Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206169 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206174 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206187 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerName="extract-content" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206192 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerName="extract-content" Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206200 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" containerName="extract-content" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206206 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" containerName="extract-content" Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206216 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerName="extract-utilities" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206221 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerName="extract-utilities" Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206228 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206236 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" 
containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206244 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" containerName="extract-utilities" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206249 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" containerName="extract-utilities" Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206258 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206263 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206272 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerName="extract-utilities" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206278 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerName="extract-utilities" Jan 21 21:15:00 crc kubenswrapper[4860]: E0121 21:15:00.206284 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerName="extract-content" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206289 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerName="extract-content" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206381 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c599eaed-fddf-4591-a474-f8c85a5470ae" containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206396 4860 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206407 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206414 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a9e789-f7d5-4640-8ecf-4eef9aa31a48" containerName="registry-server" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.206837 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.209564 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.209812 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.221960 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs"] Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.319835 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-config-volume\") pod \"collect-profiles-29483835-2x7rs\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.319958 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk7f9\" (UniqueName: 
\"kubernetes.io/projected/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-kube-api-access-bk7f9\") pod \"collect-profiles-29483835-2x7rs\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.320001 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-secret-volume\") pod \"collect-profiles-29483835-2x7rs\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.421959 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk7f9\" (UniqueName: \"kubernetes.io/projected/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-kube-api-access-bk7f9\") pod \"collect-profiles-29483835-2x7rs\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.422048 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-secret-volume\") pod \"collect-profiles-29483835-2x7rs\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.422120 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-config-volume\") pod \"collect-profiles-29483835-2x7rs\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:00 crc 
kubenswrapper[4860]: I0121 21:15:00.423361 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-config-volume\") pod \"collect-profiles-29483835-2x7rs\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.432078 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-secret-volume\") pod \"collect-profiles-29483835-2x7rs\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.441725 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk7f9\" (UniqueName: \"kubernetes.io/projected/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-kube-api-access-bk7f9\") pod \"collect-profiles-29483835-2x7rs\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.525612 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.597552 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41129b4d-292c-46eb-807b-ed0c56b43c9b" path="/var/lib/kubelet/pods/41129b4d-292c-46eb-807b-ed0c56b43c9b/volumes" Jan 21 21:15:00 crc kubenswrapper[4860]: I0121 21:15:00.598710 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d731289-0564-4ea3-a2ea-c19c361c0d3e" path="/var/lib/kubelet/pods/6d731289-0564-4ea3-a2ea-c19c361c0d3e/volumes" Jan 21 21:15:01 crc kubenswrapper[4860]: I0121 21:15:01.181266 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs"] Jan 21 21:15:01 crc kubenswrapper[4860]: I0121 21:15:01.463264 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" event={"ID":"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a","Type":"ContainerStarted","Data":"a6a1df94e7fce71982911853f7701b3f5bbcde0bf5bd5b62361a2d2a9da5ebbf"} Jan 21 21:15:01 crc kubenswrapper[4860]: I0121 21:15:01.463317 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" event={"ID":"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a","Type":"ContainerStarted","Data":"201571a6b3e6e37763193a0638350b08216d5f320e05b10aff65592c34ebbea9"} Jan 21 21:15:01 crc kubenswrapper[4860]: I0121 21:15:01.487731 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" podStartSLOduration=1.487711764 podStartE2EDuration="1.487711764s" podCreationTimestamp="2026-01-21 21:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:15:01.486130822 +0000 UTC m=+393.708309302" 
watchObservedRunningTime="2026-01-21 21:15:01.487711764 +0000 UTC m=+393.709890234" Jan 21 21:15:02 crc kubenswrapper[4860]: I0121 21:15:02.103165 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:15:02 crc kubenswrapper[4860]: I0121 21:15:02.103271 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:15:02 crc kubenswrapper[4860]: I0121 21:15:02.471411 4860 generic.go:334] "Generic (PLEG): container finished" podID="2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a" containerID="a6a1df94e7fce71982911853f7701b3f5bbcde0bf5bd5b62361a2d2a9da5ebbf" exitCode=0 Jan 21 21:15:02 crc kubenswrapper[4860]: I0121 21:15:02.471473 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" event={"ID":"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a","Type":"ContainerDied","Data":"a6a1df94e7fce71982911853f7701b3f5bbcde0bf5bd5b62361a2d2a9da5ebbf"} Jan 21 21:15:03 crc kubenswrapper[4860]: I0121 21:15:03.896773 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:03 crc kubenswrapper[4860]: I0121 21:15:03.994548 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk7f9\" (UniqueName: \"kubernetes.io/projected/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-kube-api-access-bk7f9\") pod \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " Jan 21 21:15:03 crc kubenswrapper[4860]: I0121 21:15:03.994604 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-secret-volume\") pod \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " Jan 21 21:15:03 crc kubenswrapper[4860]: I0121 21:15:03.994672 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-config-volume\") pod \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\" (UID: \"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a\") " Jan 21 21:15:03 crc kubenswrapper[4860]: I0121 21:15:03.995304 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a" (UID: "2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:15:03 crc kubenswrapper[4860]: I0121 21:15:03.996870 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:04 crc kubenswrapper[4860]: I0121 21:15:04.002796 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-kube-api-access-bk7f9" (OuterVolumeSpecName: "kube-api-access-bk7f9") pod "2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a" (UID: "2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a"). InnerVolumeSpecName "kube-api-access-bk7f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:15:04 crc kubenswrapper[4860]: I0121 21:15:04.003186 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a" (UID: "2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:15:04 crc kubenswrapper[4860]: I0121 21:15:04.098422 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:04 crc kubenswrapper[4860]: I0121 21:15:04.098461 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk7f9\" (UniqueName: \"kubernetes.io/projected/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a-kube-api-access-bk7f9\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:04 crc kubenswrapper[4860]: I0121 21:15:04.505132 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" event={"ID":"2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a","Type":"ContainerDied","Data":"201571a6b3e6e37763193a0638350b08216d5f320e05b10aff65592c34ebbea9"} Jan 21 21:15:04 crc kubenswrapper[4860]: I0121 21:15:04.505186 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="201571a6b3e6e37763193a0638350b08216d5f320e05b10aff65592c34ebbea9" Jan 21 21:15:04 crc kubenswrapper[4860]: I0121 21:15:04.505317 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs" Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.093988 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5"] Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.094920 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" podUID="291fcc44-1605-4cbe-89fb-907a287cb453" containerName="route-controller-manager" containerID="cri-o://96bfd2a4e4ec84233e34a25c9dfce0cc89a21f4c08880ca39b80fb05d8db082a" gracePeriod=30 Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.574104 4860 generic.go:334] "Generic (PLEG): container finished" podID="291fcc44-1605-4cbe-89fb-907a287cb453" containerID="96bfd2a4e4ec84233e34a25c9dfce0cc89a21f4c08880ca39b80fb05d8db082a" exitCode=0 Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.574187 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" event={"ID":"291fcc44-1605-4cbe-89fb-907a287cb453","Type":"ContainerDied","Data":"96bfd2a4e4ec84233e34a25c9dfce0cc89a21f4c08880ca39b80fb05d8db082a"} Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.574561 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" event={"ID":"291fcc44-1605-4cbe-89fb-907a287cb453","Type":"ContainerDied","Data":"c3b5ba22ada2662358f1f0986ad169d37ff7cafbb71ed85135a8548292b23219"} Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.574587 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3b5ba22ada2662358f1f0986ad169d37ff7cafbb71ed85135a8548292b23219" Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.602544 4860 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.614440 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjhcx\" (UniqueName: \"kubernetes.io/projected/291fcc44-1605-4cbe-89fb-907a287cb453-kube-api-access-mjhcx\") pod \"291fcc44-1605-4cbe-89fb-907a287cb453\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.614510 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-config\") pod \"291fcc44-1605-4cbe-89fb-907a287cb453\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.614531 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-client-ca\") pod \"291fcc44-1605-4cbe-89fb-907a287cb453\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.617022 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-client-ca" (OuterVolumeSpecName: "client-ca") pod "291fcc44-1605-4cbe-89fb-907a287cb453" (UID: "291fcc44-1605-4cbe-89fb-907a287cb453"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.617148 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-config" (OuterVolumeSpecName: "config") pod "291fcc44-1605-4cbe-89fb-907a287cb453" (UID: "291fcc44-1605-4cbe-89fb-907a287cb453"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.624348 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291fcc44-1605-4cbe-89fb-907a287cb453-kube-api-access-mjhcx" (OuterVolumeSpecName: "kube-api-access-mjhcx") pod "291fcc44-1605-4cbe-89fb-907a287cb453" (UID: "291fcc44-1605-4cbe-89fb-907a287cb453"). InnerVolumeSpecName "kube-api-access-mjhcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.715928 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291fcc44-1605-4cbe-89fb-907a287cb453-serving-cert\") pod \"291fcc44-1605-4cbe-89fb-907a287cb453\" (UID: \"291fcc44-1605-4cbe-89fb-907a287cb453\") " Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.716345 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjhcx\" (UniqueName: \"kubernetes.io/projected/291fcc44-1605-4cbe-89fb-907a287cb453-kube-api-access-mjhcx\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.716370 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.716385 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/291fcc44-1605-4cbe-89fb-907a287cb453-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.720220 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291fcc44-1605-4cbe-89fb-907a287cb453-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "291fcc44-1605-4cbe-89fb-907a287cb453" (UID: "291fcc44-1605-4cbe-89fb-907a287cb453"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:15:15 crc kubenswrapper[4860]: I0121 21:15:15.817358 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291fcc44-1605-4cbe-89fb-907a287cb453-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.239824 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq"] Jan 21 21:15:16 crc kubenswrapper[4860]: E0121 21:15:16.240156 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291fcc44-1605-4cbe-89fb-907a287cb453" containerName="route-controller-manager" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.240173 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="291fcc44-1605-4cbe-89fb-907a287cb453" containerName="route-controller-manager" Jan 21 21:15:16 crc kubenswrapper[4860]: E0121 21:15:16.240188 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a" containerName="collect-profiles" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.240194 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a" containerName="collect-profiles" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.240297 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="291fcc44-1605-4cbe-89fb-907a287cb453" containerName="route-controller-manager" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.240312 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a" containerName="collect-profiles" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.240783 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.249250 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq"] Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.424792 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-config\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: \"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.424955 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-serving-cert\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: \"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.424994 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5c9c\" (UniqueName: \"kubernetes.io/projected/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-kube-api-access-h5c9c\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: \"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.425024 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-client-ca\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: 
\"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.526097 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-config\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: \"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.526163 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-serving-cert\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: \"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.526192 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5c9c\" (UniqueName: \"kubernetes.io/projected/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-kube-api-access-h5c9c\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: \"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.526217 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-client-ca\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: \"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.527780 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-client-ca\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: \"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.528165 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-config\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: \"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.533515 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-serving-cert\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: \"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.560668 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5c9c\" (UniqueName: \"kubernetes.io/projected/de7ebbff-4dcf-48b3-9b5c-7e4dad945692-kube-api-access-h5c9c\") pod \"route-controller-manager-684f8df48d-b94bq\" (UID: \"de7ebbff-4dcf-48b3-9b5c-7e4dad945692\") " pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.580199 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5" Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.625888 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5"] Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.630362 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-zz5v5"] Jan 21 21:15:16 crc kubenswrapper[4860]: I0121 21:15:16.857153 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:17 crc kubenswrapper[4860]: I0121 21:15:17.292925 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq"] Jan 21 21:15:17 crc kubenswrapper[4860]: I0121 21:15:17.610996 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" event={"ID":"de7ebbff-4dcf-48b3-9b5c-7e4dad945692","Type":"ContainerStarted","Data":"82d7a000798dbfc110690e8513a96dfcbb9bb7b01989db410ec8dfcb0424d769"} Jan 21 21:15:17 crc kubenswrapper[4860]: I0121 21:15:17.612005 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" event={"ID":"de7ebbff-4dcf-48b3-9b5c-7e4dad945692","Type":"ContainerStarted","Data":"439c8d55a4d94f722ae70e879e11349af514b6193ce2035513b2b384ee0f3d9c"} Jan 21 21:15:17 crc kubenswrapper[4860]: I0121 21:15:17.612101 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:17 crc kubenswrapper[4860]: I0121 21:15:17.639973 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" podStartSLOduration=2.639903339 podStartE2EDuration="2.639903339s" podCreationTimestamp="2026-01-21 21:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:15:17.637534353 +0000 UTC m=+409.859712843" watchObservedRunningTime="2026-01-21 21:15:17.639903339 +0000 UTC m=+409.862081809" Jan 21 21:15:17 crc kubenswrapper[4860]: I0121 21:15:17.799055 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-684f8df48d-b94bq" Jan 21 21:15:18 crc kubenswrapper[4860]: I0121 21:15:18.585560 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="291fcc44-1605-4cbe-89fb-907a287cb453" path="/var/lib/kubelet/pods/291fcc44-1605-4cbe-89fb-907a287cb453/volumes" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.243860 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pgx87"] Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.245591 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.263172 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pgx87"] Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.347350 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.347433 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-trusted-ca\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.347458 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-bound-sa-token\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.347477 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.347512 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-registry-tls\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.347541 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.347567 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-registry-certificates\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.347595 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w9bc\" (UniqueName: \"kubernetes.io/projected/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-kube-api-access-5w9bc\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.371273 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.448374 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-registry-certificates\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.448826 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w9bc\" (UniqueName: \"kubernetes.io/projected/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-kube-api-access-5w9bc\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.448968 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-trusted-ca\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.449150 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-bound-sa-token\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.449626 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.450698 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-registry-tls\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.450829 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.450122 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-trusted-ca\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.450463 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-registry-certificates\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 
21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.451174 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.457210 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-registry-tls\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.457829 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.465581 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-bound-sa-token\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.466421 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w9bc\" (UniqueName: \"kubernetes.io/projected/a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a-kube-api-access-5w9bc\") pod \"image-registry-66df7c8f76-pgx87\" (UID: \"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:24 crc kubenswrapper[4860]: I0121 21:15:24.564280 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:25 crc kubenswrapper[4860]: I0121 21:15:25.072561 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pgx87"] Jan 21 21:15:25 crc kubenswrapper[4860]: I0121 21:15:25.673020 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" event={"ID":"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a","Type":"ContainerStarted","Data":"63e89a7c4303189c29ca6313d9583bac0a6dfad5c322ed498077a9f91a7cfd26"} Jan 21 21:15:25 crc kubenswrapper[4860]: I0121 21:15:25.673652 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" event={"ID":"a9b3c714-0fd0-49a4-87c3-3fa0c1e4a06a","Type":"ContainerStarted","Data":"ba4e6c339f96245dd9c0172d3859a1d3b0ccb76bc20cc0bceb256b7ac0414b27"} Jan 21 21:15:25 crc kubenswrapper[4860]: I0121 21:15:25.673687 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.562232 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" podStartSLOduration=6.562209322 podStartE2EDuration="6.562209322s" podCreationTimestamp="2026-01-21 21:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:15:25.698327214 +0000 UTC m=+417.920505684" watchObservedRunningTime="2026-01-21 21:15:30.562209322 +0000 UTC m=+422.784387792" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.565375 4860 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-gzkdc"] Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.565645 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gzkdc" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerName="registry-server" containerID="cri-o://806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004" gracePeriod=30 Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.590987 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m2slz"] Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.591045 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k7nfg"] Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.591294 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" podUID="baea563c-2833-407f-9cfb-571b93350be2" containerName="marketplace-operator" containerID="cri-o://2e69aaccd5778a7550f58faa704b75bfd4d2115a5492de9b43ac1edbedd4d3e3" gracePeriod=30 Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.592231 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m2slz" podUID="adf72aac-c719-4347-824a-c033f4f3a240" containerName="registry-server" containerID="cri-o://a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05" gracePeriod=30 Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.604513 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6kb9"] Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.604815 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z6kb9" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" containerName="registry-server" 
containerID="cri-o://36a0cbb2f58913b4fa90484a241250cca40140df5a07cbcffa1da4e09d72faf2" gracePeriod=30 Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.612361 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ngmkj"] Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.612796 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ngmkj" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerName="registry-server" containerID="cri-o://5a73c9072c764ef54beed91bfc7fb402cc45f4f3004944a84444b31bb41a1d45" gracePeriod=30 Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.626701 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2jl5x"] Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.628323 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.641742 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2jl5x"] Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.654670 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2jl5x\" (UID: \"dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.654802 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-2jl5x\" (UID: \"dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.654849 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84hxj\" (UniqueName: \"kubernetes.io/projected/dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87-kube-api-access-84hxj\") pod \"marketplace-operator-79b997595-2jl5x\" (UID: \"dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.755963 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2jl5x\" (UID: \"dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.756051 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2jl5x\" (UID: \"dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.756092 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84hxj\" (UniqueName: \"kubernetes.io/projected/dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87-kube-api-access-84hxj\") pod \"marketplace-operator-79b997595-2jl5x\" (UID: \"dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.759551 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2jl5x\" (UID: \"dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.767808 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2jl5x\" (UID: \"dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.776684 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84hxj\" (UniqueName: \"kubernetes.io/projected/dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87-kube-api-access-84hxj\") pod \"marketplace-operator-79b997595-2jl5x\" (UID: \"dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:30 crc kubenswrapper[4860]: I0121 21:15:30.967256 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.390103 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2jl5x"] Jan 21 21:15:31 crc kubenswrapper[4860]: W0121 21:15:31.398539 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcae5e6e_baa7_4ab5_8c8c_7d9d235e2c87.slice/crio-1c03d0b7e4858dde77c1c4809703bc0fe429c8c367a7acc557c8be1cd460da9a WatchSource:0}: Error finding container 1c03d0b7e4858dde77c1c4809703bc0fe429c8c367a7acc557c8be1cd460da9a: Status 404 returned error can't find the container with id 1c03d0b7e4858dde77c1c4809703bc0fe429c8c367a7acc557c8be1cd460da9a Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.629388 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m2slz" Jan 21 21:15:31 crc kubenswrapper[4860]: E0121 21:15:31.700746 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004 is running failed: container process not found" containerID="806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 21:15:31 crc kubenswrapper[4860]: E0121 21:15:31.714525 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004 is running failed: container process not found" containerID="806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 21:15:31 crc kubenswrapper[4860]: E0121 21:15:31.718488 4860 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004 is running failed: container process not found" containerID="806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 21:15:31 crc kubenswrapper[4860]: E0121 21:15:31.718590 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-gzkdc" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerName="registry-server" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.767250 4860 generic.go:334] "Generic (PLEG): container finished" podID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerID="5a73c9072c764ef54beed91bfc7fb402cc45f4f3004944a84444b31bb41a1d45" exitCode=0 Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.767366 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngmkj" event={"ID":"ce35873b-5e42-4d33-9212-f78afae53fd0","Type":"ContainerDied","Data":"5a73c9072c764ef54beed91bfc7fb402cc45f4f3004944a84444b31bb41a1d45"} Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.769753 4860 generic.go:334] "Generic (PLEG): container finished" podID="adf72aac-c719-4347-824a-c033f4f3a240" containerID="a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05" exitCode=0 Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.769849 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2slz" event={"ID":"adf72aac-c719-4347-824a-c033f4f3a240","Type":"ContainerDied","Data":"a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05"} Jan 21 21:15:31 
crc kubenswrapper[4860]: I0121 21:15:31.769888 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2slz" event={"ID":"adf72aac-c719-4347-824a-c033f4f3a240","Type":"ContainerDied","Data":"a907df6b7c339dd2a27bc5c066f3a63aca09edbebe5efafb427cb9b27d667e29"} Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.769918 4860 scope.go:117] "RemoveContainer" containerID="a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.770099 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m2slz" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.770772 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk958\" (UniqueName: \"kubernetes.io/projected/adf72aac-c719-4347-824a-c033f4f3a240-kube-api-access-wk958\") pod \"adf72aac-c719-4347-824a-c033f4f3a240\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.770842 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-utilities\") pod \"adf72aac-c719-4347-824a-c033f4f3a240\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.771011 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-catalog-content\") pod \"adf72aac-c719-4347-824a-c033f4f3a240\" (UID: \"adf72aac-c719-4347-824a-c033f4f3a240\") " Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.772306 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-utilities" (OuterVolumeSpecName: "utilities") 
pod "adf72aac-c719-4347-824a-c033f4f3a240" (UID: "adf72aac-c719-4347-824a-c033f4f3a240"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.788101 4860 generic.go:334] "Generic (PLEG): container finished" podID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerID="806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004" exitCode=0 Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.788466 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkdc" event={"ID":"dda00c6f-b112-49c0-bef6-aa2770a1c323","Type":"ContainerDied","Data":"806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004"} Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.791700 4860 generic.go:334] "Generic (PLEG): container finished" podID="baea563c-2833-407f-9cfb-571b93350be2" containerID="2e69aaccd5778a7550f58faa704b75bfd4d2115a5492de9b43ac1edbedd4d3e3" exitCode=0 Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.791870 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" event={"ID":"baea563c-2833-407f-9cfb-571b93350be2","Type":"ContainerDied","Data":"2e69aaccd5778a7550f58faa704b75bfd4d2115a5492de9b43ac1edbedd4d3e3"} Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.799685 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" event={"ID":"dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87","Type":"ContainerStarted","Data":"f665307df5e8925b828c2dc9980f681205fa6bd1eb1b627ed2c715936f774d15"} Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.799813 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" 
event={"ID":"dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87","Type":"ContainerStarted","Data":"1c03d0b7e4858dde77c1c4809703bc0fe429c8c367a7acc557c8be1cd460da9a"} Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.801359 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.798767 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf72aac-c719-4347-824a-c033f4f3a240-kube-api-access-wk958" (OuterVolumeSpecName: "kube-api-access-wk958") pod "adf72aac-c719-4347-824a-c033f4f3a240" (UID: "adf72aac-c719-4347-824a-c033f4f3a240"). InnerVolumeSpecName "kube-api-access-wk958". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.812302 4860 scope.go:117] "RemoveContainer" containerID="d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.812570 4860 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2jl5x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" start-of-body= Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.812667 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" podUID="dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.820310 4860 generic.go:334] "Generic (PLEG): container finished" podID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" containerID="36a0cbb2f58913b4fa90484a241250cca40140df5a07cbcffa1da4e09d72faf2" exitCode=0 
Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.820378 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6kb9" event={"ID":"a21cacfb-049f-48d8-8c5d-4ad7ee333834","Type":"ContainerDied","Data":"36a0cbb2f58913b4fa90484a241250cca40140df5a07cbcffa1da4e09d72faf2"} Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.833845 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x" podStartSLOduration=1.833819466 podStartE2EDuration="1.833819466s" podCreationTimestamp="2026-01-21 21:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:15:31.833609069 +0000 UTC m=+424.055787539" watchObservedRunningTime="2026-01-21 21:15:31.833819466 +0000 UTC m=+424.055997936" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.851417 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adf72aac-c719-4347-824a-c033f4f3a240" (UID: "adf72aac-c719-4347-824a-c033f4f3a240"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.874393 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk958\" (UniqueName: \"kubernetes.io/projected/adf72aac-c719-4347-824a-c033f4f3a240-kube-api-access-wk958\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.874441 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.874456 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf72aac-c719-4347-824a-c033f4f3a240-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.884446 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.892696 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6kb9" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.906482 4860 scope.go:117] "RemoveContainer" containerID="6c30850e489ee04e506be6ffef60f9c6cbd6982f7cf6897c8e3a45d2fdd05f55" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.908030 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzkdc" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.920282 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ngmkj" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.935040 4860 scope.go:117] "RemoveContainer" containerID="a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05" Jan 21 21:15:31 crc kubenswrapper[4860]: E0121 21:15:31.939598 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05\": container with ID starting with a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05 not found: ID does not exist" containerID="a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.939675 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05"} err="failed to get container status \"a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05\": rpc error: code = NotFound desc = could not find container \"a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05\": container with ID starting with a87521010d144118d3bc36fa9e67c0357cfee40ac52a9405023d7c2e91b28e05 not found: ID does not exist" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.939735 4860 scope.go:117] "RemoveContainer" containerID="d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f" Jan 21 21:15:31 crc kubenswrapper[4860]: E0121 21:15:31.940278 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f\": container with ID starting with d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f not found: ID does not exist" containerID="d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f" Jan 21 21:15:31 crc kubenswrapper[4860]: 
I0121 21:15:31.940353 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f"} err="failed to get container status \"d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f\": rpc error: code = NotFound desc = could not find container \"d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f\": container with ID starting with d50d486004581cc5da63c6e02ff6f9ba1d0b597660b64f7e698cb2ef3f416f6f not found: ID does not exist" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.940400 4860 scope.go:117] "RemoveContainer" containerID="6c30850e489ee04e506be6ffef60f9c6cbd6982f7cf6897c8e3a45d2fdd05f55" Jan 21 21:15:31 crc kubenswrapper[4860]: E0121 21:15:31.940808 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c30850e489ee04e506be6ffef60f9c6cbd6982f7cf6897c8e3a45d2fdd05f55\": container with ID starting with 6c30850e489ee04e506be6ffef60f9c6cbd6982f7cf6897c8e3a45d2fdd05f55 not found: ID does not exist" containerID="6c30850e489ee04e506be6ffef60f9c6cbd6982f7cf6897c8e3a45d2fdd05f55" Jan 21 21:15:31 crc kubenswrapper[4860]: I0121 21:15:31.940847 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c30850e489ee04e506be6ffef60f9c6cbd6982f7cf6897c8e3a45d2fdd05f55"} err="failed to get container status \"6c30850e489ee04e506be6ffef60f9c6cbd6982f7cf6897c8e3a45d2fdd05f55\": rpc error: code = NotFound desc = could not find container \"6c30850e489ee04e506be6ffef60f9c6cbd6982f7cf6897c8e3a45d2fdd05f55\": container with ID starting with 6c30850e489ee04e506be6ffef60f9c6cbd6982f7cf6897c8e3a45d2fdd05f55 not found: ID does not exist" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.078908 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/baea563c-2833-407f-9cfb-571b93350be2-marketplace-trusted-ca\") pod \"baea563c-2833-407f-9cfb-571b93350be2\" (UID: \"baea563c-2833-407f-9cfb-571b93350be2\") " Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.079075 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-utilities\") pod \"ce35873b-5e42-4d33-9212-f78afae53fd0\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.079149 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fxcf\" (UniqueName: \"kubernetes.io/projected/ce35873b-5e42-4d33-9212-f78afae53fd0-kube-api-access-5fxcf\") pod \"ce35873b-5e42-4d33-9212-f78afae53fd0\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.079211 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgmrc\" (UniqueName: \"kubernetes.io/projected/baea563c-2833-407f-9cfb-571b93350be2-kube-api-access-jgmrc\") pod \"baea563c-2833-407f-9cfb-571b93350be2\" (UID: \"baea563c-2833-407f-9cfb-571b93350be2\") " Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.079291 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbsmz\" (UniqueName: \"kubernetes.io/projected/dda00c6f-b112-49c0-bef6-aa2770a1c323-kube-api-access-rbsmz\") pod \"dda00c6f-b112-49c0-bef6-aa2770a1c323\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.079336 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-utilities\") pod \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " Jan 21 21:15:32 crc 
kubenswrapper[4860]: I0121 21:15:32.079381 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-utilities\") pod \"dda00c6f-b112-49c0-bef6-aa2770a1c323\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.079422 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-catalog-content\") pod \"ce35873b-5e42-4d33-9212-f78afae53fd0\" (UID: \"ce35873b-5e42-4d33-9212-f78afae53fd0\") " Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.079472 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-catalog-content\") pod \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.079484 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baea563c-2833-407f-9cfb-571b93350be2-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "baea563c-2833-407f-9cfb-571b93350be2" (UID: "baea563c-2833-407f-9cfb-571b93350be2"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.079528 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baea563c-2833-407f-9cfb-571b93350be2-marketplace-operator-metrics\") pod \"baea563c-2833-407f-9cfb-571b93350be2\" (UID: \"baea563c-2833-407f-9cfb-571b93350be2\") " Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.079572 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-catalog-content\") pod \"dda00c6f-b112-49c0-bef6-aa2770a1c323\" (UID: \"dda00c6f-b112-49c0-bef6-aa2770a1c323\") " Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.079614 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pd7m\" (UniqueName: \"kubernetes.io/projected/a21cacfb-049f-48d8-8c5d-4ad7ee333834-kube-api-access-9pd7m\") pod \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\" (UID: \"a21cacfb-049f-48d8-8c5d-4ad7ee333834\") " Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.080999 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-utilities" (OuterVolumeSpecName: "utilities") pod "ce35873b-5e42-4d33-9212-f78afae53fd0" (UID: "ce35873b-5e42-4d33-9212-f78afae53fd0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.081377 4860 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baea563c-2833-407f-9cfb-571b93350be2-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.081413 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.082229 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-utilities" (OuterVolumeSpecName: "utilities") pod "a21cacfb-049f-48d8-8c5d-4ad7ee333834" (UID: "a21cacfb-049f-48d8-8c5d-4ad7ee333834"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.082675 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-utilities" (OuterVolumeSpecName: "utilities") pod "dda00c6f-b112-49c0-bef6-aa2770a1c323" (UID: "dda00c6f-b112-49c0-bef6-aa2770a1c323"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.084728 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce35873b-5e42-4d33-9212-f78afae53fd0-kube-api-access-5fxcf" (OuterVolumeSpecName: "kube-api-access-5fxcf") pod "ce35873b-5e42-4d33-9212-f78afae53fd0" (UID: "ce35873b-5e42-4d33-9212-f78afae53fd0"). InnerVolumeSpecName "kube-api-access-5fxcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.085754 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a21cacfb-049f-48d8-8c5d-4ad7ee333834-kube-api-access-9pd7m" (OuterVolumeSpecName: "kube-api-access-9pd7m") pod "a21cacfb-049f-48d8-8c5d-4ad7ee333834" (UID: "a21cacfb-049f-48d8-8c5d-4ad7ee333834"). InnerVolumeSpecName "kube-api-access-9pd7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.086606 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baea563c-2833-407f-9cfb-571b93350be2-kube-api-access-jgmrc" (OuterVolumeSpecName: "kube-api-access-jgmrc") pod "baea563c-2833-407f-9cfb-571b93350be2" (UID: "baea563c-2833-407f-9cfb-571b93350be2"). InnerVolumeSpecName "kube-api-access-jgmrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.086992 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baea563c-2833-407f-9cfb-571b93350be2-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "baea563c-2833-407f-9cfb-571b93350be2" (UID: "baea563c-2833-407f-9cfb-571b93350be2"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.091344 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda00c6f-b112-49c0-bef6-aa2770a1c323-kube-api-access-rbsmz" (OuterVolumeSpecName: "kube-api-access-rbsmz") pod "dda00c6f-b112-49c0-bef6-aa2770a1c323" (UID: "dda00c6f-b112-49c0-bef6-aa2770a1c323"). InnerVolumeSpecName "kube-api-access-rbsmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.105409 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.105487 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.105549 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.106425 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b65df24bc6ea2dc841321cae48e22a15ad8f9a2859950e88c8846162091f287"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.106520 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://3b65df24bc6ea2dc841321cae48e22a15ad8f9a2859950e88c8846162091f287" gracePeriod=600 Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.117510 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-m2slz"] Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.130080 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a21cacfb-049f-48d8-8c5d-4ad7ee333834" (UID: "a21cacfb-049f-48d8-8c5d-4ad7ee333834"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.131579 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m2slz"] Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.174549 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dda00c6f-b112-49c0-bef6-aa2770a1c323" (UID: "dda00c6f-b112-49c0-bef6-aa2770a1c323"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.183478 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.183551 4860 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baea563c-2833-407f-9cfb-571b93350be2-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.183571 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.183583 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pd7m\" (UniqueName: \"kubernetes.io/projected/a21cacfb-049f-48d8-8c5d-4ad7ee333834-kube-api-access-9pd7m\") on node \"crc\" DevicePath \"\""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.183594 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fxcf\" (UniqueName: \"kubernetes.io/projected/ce35873b-5e42-4d33-9212-f78afae53fd0-kube-api-access-5fxcf\") on node \"crc\" DevicePath \"\""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.183602 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgmrc\" (UniqueName: \"kubernetes.io/projected/baea563c-2833-407f-9cfb-571b93350be2-kube-api-access-jgmrc\") on node \"crc\" DevicePath \"\""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.183612 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbsmz\" (UniqueName: \"kubernetes.io/projected/dda00c6f-b112-49c0-bef6-aa2770a1c323-kube-api-access-rbsmz\") on node \"crc\" DevicePath \"\""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.183622 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a21cacfb-049f-48d8-8c5d-4ad7ee333834-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.183631 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda00c6f-b112-49c0-bef6-aa2770a1c323-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.220508 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce35873b-5e42-4d33-9212-f78afae53fd0" (UID: "ce35873b-5e42-4d33-9212-f78afae53fd0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.285620 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce35873b-5e42-4d33-9212-f78afae53fd0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.589753 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf72aac-c719-4347-824a-c033f4f3a240" path="/var/lib/kubelet/pods/adf72aac-c719-4347-824a-c033f4f3a240/volumes"
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.828085 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngmkj"
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.827999 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngmkj" event={"ID":"ce35873b-5e42-4d33-9212-f78afae53fd0","Type":"ContainerDied","Data":"3a418381e56aa97a219bbd5285a87baf8febedd034cbee4a453faffb2e7ea5e3"}
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.828427 4860 scope.go:117] "RemoveContainer" containerID="5a73c9072c764ef54beed91bfc7fb402cc45f4f3004944a84444b31bb41a1d45"
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.834659 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="3b65df24bc6ea2dc841321cae48e22a15ad8f9a2859950e88c8846162091f287" exitCode=0
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.834790 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"3b65df24bc6ea2dc841321cae48e22a15ad8f9a2859950e88c8846162091f287"}
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.834898 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"981fe1e88982a08419f9f8e881fb2849f11febf5c3b56821d4dc8376c101a3c8"}
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.838330 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkdc" event={"ID":"dda00c6f-b112-49c0-bef6-aa2770a1c323","Type":"ContainerDied","Data":"d3b70d219bc224cc60622f4e6c3c1eb5e8dd5081ffd68804c03913b88dcb00c6"}
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.838415 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzkdc"
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.840722 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg"
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.840711 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k7nfg" event={"ID":"baea563c-2833-407f-9cfb-571b93350be2","Type":"ContainerDied","Data":"3c7964c6aec95df8c176ac8d06d54b52ebcc9966ffd8de8af978a218b6866c3c"}
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.846854 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6kb9" event={"ID":"a21cacfb-049f-48d8-8c5d-4ad7ee333834","Type":"ContainerDied","Data":"7dbfb2d0e8a210843fcefc935bac47fe884e62a474dd5846012b57516229b26a"}
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.847115 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6kb9"
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.852607 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2jl5x"
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.861539 4860 scope.go:117] "RemoveContainer" containerID="c951dbd71470121fe3731102993ef5ca99c731cf2887e3cb52f16ecea1a8eb47"
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.877482 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6kb9"]
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.886080 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6kb9"]
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.895020 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ngmkj"]
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.898893 4860 scope.go:117] "RemoveContainer" containerID="154316144c4eda081c33af65b6799f96f157906c09049060a8b2728261762015"
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.908339 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ngmkj"]
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.912951 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k7nfg"]
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.926130 4860 scope.go:117] "RemoveContainer" containerID="7319b8fc8b6e2295e29c62b4809611adef99a8a227963df32514bbbd402c8ac6"
Jan 21 21:15:32 crc kubenswrapper[4860]: I0121 21:15:32.928331 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k7nfg"]
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.118516 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzkdc"]
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.132674 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gzkdc"]
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.143366 4860 scope.go:117] "RemoveContainer" containerID="806fe153cce7baa54df7efb408b50ce5c465fc6cc5b60fc81d2364d9c35fe004"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.167359 4860 scope.go:117] "RemoveContainer" containerID="ff9ccf29e544762e6087e2e047187ef42010e13f462d96d7b6afa48d603ace68"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.185540 4860 scope.go:117] "RemoveContainer" containerID="989806eae179705dba7fbbdfa9c7525b7b01c16da6db88bd079fdea9a35925ba"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.205881 4860 scope.go:117] "RemoveContainer" containerID="2e69aaccd5778a7550f58faa704b75bfd4d2115a5492de9b43ac1edbedd4d3e3"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.231696 4860 scope.go:117] "RemoveContainer" containerID="36a0cbb2f58913b4fa90484a241250cca40140df5a07cbcffa1da4e09d72faf2"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.250816 4860 scope.go:117] "RemoveContainer" containerID="b12a09d957ec59cca97e2731908ec775dff8fd8b6a5ad5673ee1fb57bdb897c1"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.269862 4860 scope.go:117] "RemoveContainer" containerID="085ca0b03d683d05b469df1401edff73085906a273bb1d5f2723419b8737cad4"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.365612 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g4nd6"]
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.365888 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf72aac-c719-4347-824a-c033f4f3a240" containerName="extract-content"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.365902 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf72aac-c719-4347-824a-c033f4f3a240" containerName="extract-content"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.365917 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerName="extract-content"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.365924 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerName="extract-content"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.365947 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerName="extract-utilities"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.365955 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerName="extract-utilities"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.365963 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" containerName="extract-content"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.365970 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" containerName="extract-content"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.365981 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf72aac-c719-4347-824a-c033f4f3a240" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.365988 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf72aac-c719-4347-824a-c033f4f3a240" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.365997 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerName="extract-utilities"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366003 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerName="extract-utilities"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.366014 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf72aac-c719-4347-824a-c033f4f3a240" containerName="extract-utilities"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366020 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf72aac-c719-4347-824a-c033f4f3a240" containerName="extract-utilities"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.366031 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366110 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.366120 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366127 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.366139 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" containerName="extract-utilities"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366146 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" containerName="extract-utilities"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.366155 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerName="extract-content"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366161 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerName="extract-content"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.366171 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366178 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: E0121 21:15:33.366187 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baea563c-2833-407f-9cfb-571b93350be2" containerName="marketplace-operator"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366193 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="baea563c-2833-407f-9cfb-571b93350be2" containerName="marketplace-operator"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366288 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366300 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf72aac-c719-4347-824a-c033f4f3a240" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366309 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366322 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="baea563c-2833-407f-9cfb-571b93350be2" containerName="marketplace-operator"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.366343 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" containerName="registry-server"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.367098 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4nd6"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.371820 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.378626 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4nd6"]
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.504692 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caba0ee-5c63-4f29-a763-d68278316c8c-utilities\") pod \"community-operators-g4nd6\" (UID: \"7caba0ee-5c63-4f29-a763-d68278316c8c\") " pod="openshift-marketplace/community-operators-g4nd6"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.504771 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caba0ee-5c63-4f29-a763-d68278316c8c-catalog-content\") pod \"community-operators-g4nd6\" (UID: \"7caba0ee-5c63-4f29-a763-d68278316c8c\") " pod="openshift-marketplace/community-operators-g4nd6"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.504806 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n98qh\" (UniqueName: \"kubernetes.io/projected/7caba0ee-5c63-4f29-a763-d68278316c8c-kube-api-access-n98qh\") pod \"community-operators-g4nd6\" (UID: \"7caba0ee-5c63-4f29-a763-d68278316c8c\") " pod="openshift-marketplace/community-operators-g4nd6"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.606621 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n98qh\" (UniqueName: \"kubernetes.io/projected/7caba0ee-5c63-4f29-a763-d68278316c8c-kube-api-access-n98qh\") pod \"community-operators-g4nd6\" (UID: \"7caba0ee-5c63-4f29-a763-d68278316c8c\") " pod="openshift-marketplace/community-operators-g4nd6"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.606765 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caba0ee-5c63-4f29-a763-d68278316c8c-utilities\") pod \"community-operators-g4nd6\" (UID: \"7caba0ee-5c63-4f29-a763-d68278316c8c\") " pod="openshift-marketplace/community-operators-g4nd6"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.606816 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caba0ee-5c63-4f29-a763-d68278316c8c-catalog-content\") pod \"community-operators-g4nd6\" (UID: \"7caba0ee-5c63-4f29-a763-d68278316c8c\") " pod="openshift-marketplace/community-operators-g4nd6"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.607547 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caba0ee-5c63-4f29-a763-d68278316c8c-catalog-content\") pod \"community-operators-g4nd6\" (UID: \"7caba0ee-5c63-4f29-a763-d68278316c8c\") " pod="openshift-marketplace/community-operators-g4nd6"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.607536 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caba0ee-5c63-4f29-a763-d68278316c8c-utilities\") pod \"community-operators-g4nd6\" (UID: \"7caba0ee-5c63-4f29-a763-d68278316c8c\") " pod="openshift-marketplace/community-operators-g4nd6"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.629662 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n98qh\" (UniqueName: \"kubernetes.io/projected/7caba0ee-5c63-4f29-a763-d68278316c8c-kube-api-access-n98qh\") pod \"community-operators-g4nd6\" (UID: \"7caba0ee-5c63-4f29-a763-d68278316c8c\") " pod="openshift-marketplace/community-operators-g4nd6"
Jan 21 21:15:33 crc kubenswrapper[4860]: I0121 21:15:33.696121 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4nd6"
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.150350 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4nd6"]
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.589191 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a21cacfb-049f-48d8-8c5d-4ad7ee333834" path="/var/lib/kubelet/pods/a21cacfb-049f-48d8-8c5d-4ad7ee333834/volumes"
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.590795 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baea563c-2833-407f-9cfb-571b93350be2" path="/var/lib/kubelet/pods/baea563c-2833-407f-9cfb-571b93350be2/volumes"
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.591473 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce35873b-5e42-4d33-9212-f78afae53fd0" path="/var/lib/kubelet/pods/ce35873b-5e42-4d33-9212-f78afae53fd0/volumes"
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.592643 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dda00c6f-b112-49c0-bef6-aa2770a1c323" path="/var/lib/kubelet/pods/dda00c6f-b112-49c0-bef6-aa2770a1c323/volumes"
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.759814 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wtc7j"]
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.761043 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtc7j"
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.763451 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.777762 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtc7j"]
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.871099 4860 generic.go:334] "Generic (PLEG): container finished" podID="7caba0ee-5c63-4f29-a763-d68278316c8c" containerID="ce642f3bf37261ab3d3eb3113da9ef42b168fd662edfd2f78af46353d0e3cfbd" exitCode=0
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.871238 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4nd6" event={"ID":"7caba0ee-5c63-4f29-a763-d68278316c8c","Type":"ContainerDied","Data":"ce642f3bf37261ab3d3eb3113da9ef42b168fd662edfd2f78af46353d0e3cfbd"}
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.871340 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4nd6" event={"ID":"7caba0ee-5c63-4f29-a763-d68278316c8c","Type":"ContainerStarted","Data":"0ff1d4e8376e7d2b1ebc83ab97c72814b08fa3f9dbc8fb135dd922e30f0e2821"}
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.936002 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e639198b-f128-4643-823a-f52afd19d43b-utilities\") pod \"redhat-marketplace-wtc7j\" (UID: \"e639198b-f128-4643-823a-f52afd19d43b\") " pod="openshift-marketplace/redhat-marketplace-wtc7j"
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.936156 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxq4j\" (UniqueName: \"kubernetes.io/projected/e639198b-f128-4643-823a-f52afd19d43b-kube-api-access-nxq4j\") pod \"redhat-marketplace-wtc7j\" (UID: \"e639198b-f128-4643-823a-f52afd19d43b\") " pod="openshift-marketplace/redhat-marketplace-wtc7j"
Jan 21 21:15:34 crc kubenswrapper[4860]: I0121 21:15:34.936228 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e639198b-f128-4643-823a-f52afd19d43b-catalog-content\") pod \"redhat-marketplace-wtc7j\" (UID: \"e639198b-f128-4643-823a-f52afd19d43b\") " pod="openshift-marketplace/redhat-marketplace-wtc7j"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.037853 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e639198b-f128-4643-823a-f52afd19d43b-catalog-content\") pod \"redhat-marketplace-wtc7j\" (UID: \"e639198b-f128-4643-823a-f52afd19d43b\") " pod="openshift-marketplace/redhat-marketplace-wtc7j"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.037993 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e639198b-f128-4643-823a-f52afd19d43b-utilities\") pod \"redhat-marketplace-wtc7j\" (UID: \"e639198b-f128-4643-823a-f52afd19d43b\") " pod="openshift-marketplace/redhat-marketplace-wtc7j"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.038656 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e639198b-f128-4643-823a-f52afd19d43b-utilities\") pod \"redhat-marketplace-wtc7j\" (UID: \"e639198b-f128-4643-823a-f52afd19d43b\") " pod="openshift-marketplace/redhat-marketplace-wtc7j"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.038736 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e639198b-f128-4643-823a-f52afd19d43b-catalog-content\") pod \"redhat-marketplace-wtc7j\" (UID: \"e639198b-f128-4643-823a-f52afd19d43b\") " pod="openshift-marketplace/redhat-marketplace-wtc7j"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.038893 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxq4j\" (UniqueName: \"kubernetes.io/projected/e639198b-f128-4643-823a-f52afd19d43b-kube-api-access-nxq4j\") pod \"redhat-marketplace-wtc7j\" (UID: \"e639198b-f128-4643-823a-f52afd19d43b\") " pod="openshift-marketplace/redhat-marketplace-wtc7j"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.072315 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxq4j\" (UniqueName: \"kubernetes.io/projected/e639198b-f128-4643-823a-f52afd19d43b-kube-api-access-nxq4j\") pod \"redhat-marketplace-wtc7j\" (UID: \"e639198b-f128-4643-823a-f52afd19d43b\") " pod="openshift-marketplace/redhat-marketplace-wtc7j"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.078607 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtc7j"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.524288 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtc7j"]
Jan 21 21:15:35 crc kubenswrapper[4860]: W0121 21:15:35.542222 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode639198b_f128_4643_823a_f52afd19d43b.slice/crio-199bf701c33211296d42d3a6fe401147b5c00ea935ece69359ccece35ff5214e WatchSource:0}: Error finding container 199bf701c33211296d42d3a6fe401147b5c00ea935ece69359ccece35ff5214e: Status 404 returned error can't find the container with id 199bf701c33211296d42d3a6fe401147b5c00ea935ece69359ccece35ff5214e
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.762661 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mwvkt"]
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.764010 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mwvkt"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.766685 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.783691 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mwvkt"]
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.882337 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4nd6" event={"ID":"7caba0ee-5c63-4f29-a763-d68278316c8c","Type":"ContainerStarted","Data":"e29816ae8bbeb46f0cce2cb507e811b691b1ebcec5e82f6756bbbf0136a73226"}
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.885754 4860 generic.go:334] "Generic (PLEG): container finished" podID="e639198b-f128-4643-823a-f52afd19d43b" containerID="f9d8e8f5163da8ef228dae92d179e2abc5b74ed237ee36f12040c6b26265f1eb" exitCode=0
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.885795 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtc7j" event={"ID":"e639198b-f128-4643-823a-f52afd19d43b","Type":"ContainerDied","Data":"f9d8e8f5163da8ef228dae92d179e2abc5b74ed237ee36f12040c6b26265f1eb"}
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.885829 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtc7j" event={"ID":"e639198b-f128-4643-823a-f52afd19d43b","Type":"ContainerStarted","Data":"199bf701c33211296d42d3a6fe401147b5c00ea935ece69359ccece35ff5214e"}
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.952059 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859f5834-9350-48bf-9329-e20069b0613e-utilities\") pod \"redhat-operators-mwvkt\" (UID: \"859f5834-9350-48bf-9329-e20069b0613e\") " pod="openshift-marketplace/redhat-operators-mwvkt"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.952149 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859f5834-9350-48bf-9329-e20069b0613e-catalog-content\") pod \"redhat-operators-mwvkt\" (UID: \"859f5834-9350-48bf-9329-e20069b0613e\") " pod="openshift-marketplace/redhat-operators-mwvkt"
Jan 21 21:15:35 crc kubenswrapper[4860]: I0121 21:15:35.952189 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fmgs\" (UniqueName: \"kubernetes.io/projected/859f5834-9350-48bf-9329-e20069b0613e-kube-api-access-7fmgs\") pod \"redhat-operators-mwvkt\" (UID: \"859f5834-9350-48bf-9329-e20069b0613e\") " pod="openshift-marketplace/redhat-operators-mwvkt"
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.054094 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859f5834-9350-48bf-9329-e20069b0613e-utilities\") pod \"redhat-operators-mwvkt\" (UID: \"859f5834-9350-48bf-9329-e20069b0613e\") " pod="openshift-marketplace/redhat-operators-mwvkt"
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.054153 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859f5834-9350-48bf-9329-e20069b0613e-catalog-content\") pod \"redhat-operators-mwvkt\" (UID: \"859f5834-9350-48bf-9329-e20069b0613e\") " pod="openshift-marketplace/redhat-operators-mwvkt"
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.054180 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fmgs\" (UniqueName: \"kubernetes.io/projected/859f5834-9350-48bf-9329-e20069b0613e-kube-api-access-7fmgs\") pod \"redhat-operators-mwvkt\" (UID: \"859f5834-9350-48bf-9329-e20069b0613e\") " pod="openshift-marketplace/redhat-operators-mwvkt"
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.055258 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859f5834-9350-48bf-9329-e20069b0613e-utilities\") pod \"redhat-operators-mwvkt\" (UID: \"859f5834-9350-48bf-9329-e20069b0613e\") " pod="openshift-marketplace/redhat-operators-mwvkt"
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.055660 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859f5834-9350-48bf-9329-e20069b0613e-catalog-content\") pod \"redhat-operators-mwvkt\" (UID: \"859f5834-9350-48bf-9329-e20069b0613e\") " pod="openshift-marketplace/redhat-operators-mwvkt"
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.077135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fmgs\" (UniqueName: \"kubernetes.io/projected/859f5834-9350-48bf-9329-e20069b0613e-kube-api-access-7fmgs\") pod \"redhat-operators-mwvkt\" (UID: \"859f5834-9350-48bf-9329-e20069b0613e\") " pod="openshift-marketplace/redhat-operators-mwvkt"
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.094371 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mwvkt"
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.560854 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mwvkt"]
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.895678 4860 generic.go:334] "Generic (PLEG): container finished" podID="7caba0ee-5c63-4f29-a763-d68278316c8c" containerID="e29816ae8bbeb46f0cce2cb507e811b691b1ebcec5e82f6756bbbf0136a73226" exitCode=0
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.895825 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4nd6" event={"ID":"7caba0ee-5c63-4f29-a763-d68278316c8c","Type":"ContainerDied","Data":"e29816ae8bbeb46f0cce2cb507e811b691b1ebcec5e82f6756bbbf0136a73226"}
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.900595 4860 generic.go:334] "Generic (PLEG): container finished" podID="859f5834-9350-48bf-9329-e20069b0613e" containerID="f4d9635e96b299cd78b9ec5a0388b9cd5dd5a46643b5e3269f14529be3000467" exitCode=0
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.900652 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwvkt" event={"ID":"859f5834-9350-48bf-9329-e20069b0613e","Type":"ContainerDied","Data":"f4d9635e96b299cd78b9ec5a0388b9cd5dd5a46643b5e3269f14529be3000467"}
Jan 21 21:15:36 crc kubenswrapper[4860]: I0121 21:15:36.900682 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwvkt" event={"ID":"859f5834-9350-48bf-9329-e20069b0613e","Type":"ContainerStarted","Data":"070c64a6154f71dbb8e2d41cf9665072dfcf6e3473a6333b741ebbe93f0fa16b"}
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.167849 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bspqh"]
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.172086 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bspqh"
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.175503 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.176876 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bspqh"]
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.272372 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3ceb80d-2539-41b8-a472-6dc1a6bdee30-utilities\") pod \"certified-operators-bspqh\" (UID: \"b3ceb80d-2539-41b8-a472-6dc1a6bdee30\") " pod="openshift-marketplace/certified-operators-bspqh"
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.272433 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3ceb80d-2539-41b8-a472-6dc1a6bdee30-catalog-content\") pod \"certified-operators-bspqh\" (UID: \"b3ceb80d-2539-41b8-a472-6dc1a6bdee30\") " pod="openshift-marketplace/certified-operators-bspqh"
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.272498 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh24k\" (UniqueName: \"kubernetes.io/projected/b3ceb80d-2539-41b8-a472-6dc1a6bdee30-kube-api-access-jh24k\") pod \"certified-operators-bspqh\" (UID: \"b3ceb80d-2539-41b8-a472-6dc1a6bdee30\") " pod="openshift-marketplace/certified-operators-bspqh"
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.374041 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh24k\" (UniqueName: \"kubernetes.io/projected/b3ceb80d-2539-41b8-a472-6dc1a6bdee30-kube-api-access-jh24k\") pod \"certified-operators-bspqh\" (UID: \"b3ceb80d-2539-41b8-a472-6dc1a6bdee30\") " pod="openshift-marketplace/certified-operators-bspqh"
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.374334 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3ceb80d-2539-41b8-a472-6dc1a6bdee30-utilities\") pod \"certified-operators-bspqh\" (UID: \"b3ceb80d-2539-41b8-a472-6dc1a6bdee30\") " pod="openshift-marketplace/certified-operators-bspqh"
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.374435 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3ceb80d-2539-41b8-a472-6dc1a6bdee30-catalog-content\") pod \"certified-operators-bspqh\" (UID: \"b3ceb80d-2539-41b8-a472-6dc1a6bdee30\") " pod="openshift-marketplace/certified-operators-bspqh"
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.374980 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3ceb80d-2539-41b8-a472-6dc1a6bdee30-utilities\") pod \"certified-operators-bspqh\" (UID: \"b3ceb80d-2539-41b8-a472-6dc1a6bdee30\") " pod="openshift-marketplace/certified-operators-bspqh"
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.375312 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3ceb80d-2539-41b8-a472-6dc1a6bdee30-catalog-content\") pod \"certified-operators-bspqh\" (UID: \"b3ceb80d-2539-41b8-a472-6dc1a6bdee30\") " pod="openshift-marketplace/certified-operators-bspqh"
Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.398847 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh24k\" (UniqueName: \"kubernetes.io/projected/b3ceb80d-2539-41b8-a472-6dc1a6bdee30-kube-api-access-jh24k\") pod \"certified-operators-bspqh\" (UID:
\"b3ceb80d-2539-41b8-a472-6dc1a6bdee30\") " pod="openshift-marketplace/certified-operators-bspqh" Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.531427 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bspqh" Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.909131 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4nd6" event={"ID":"7caba0ee-5c63-4f29-a763-d68278316c8c","Type":"ContainerStarted","Data":"5f7d99989e0e6e0a9e485c0f48ce2b0e4d811f0d987c4b8f2b5563cb9cb0764d"} Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.912253 4860 generic.go:334] "Generic (PLEG): container finished" podID="e639198b-f128-4643-823a-f52afd19d43b" containerID="769e00ff11db18513df9884334535fc5b43a973312ec72cdc40d66fc3b8349bd" exitCode=0 Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.912404 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtc7j" event={"ID":"e639198b-f128-4643-823a-f52afd19d43b","Type":"ContainerDied","Data":"769e00ff11db18513df9884334535fc5b43a973312ec72cdc40d66fc3b8349bd"} Jan 21 21:15:37 crc kubenswrapper[4860]: I0121 21:15:37.958830 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g4nd6" podStartSLOduration=2.525915023 podStartE2EDuration="4.958808529s" podCreationTimestamp="2026-01-21 21:15:33 +0000 UTC" firstStartedPulling="2026-01-21 21:15:34.872971515 +0000 UTC m=+427.095149985" lastFinishedPulling="2026-01-21 21:15:37.305865021 +0000 UTC m=+429.528043491" observedRunningTime="2026-01-21 21:15:37.933548398 +0000 UTC m=+430.155726878" watchObservedRunningTime="2026-01-21 21:15:37.958808529 +0000 UTC m=+430.180986999" Jan 21 21:15:38 crc kubenswrapper[4860]: I0121 21:15:38.007975 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bspqh"] Jan 
21 21:15:38 crc kubenswrapper[4860]: W0121 21:15:38.016010 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3ceb80d_2539_41b8_a472_6dc1a6bdee30.slice/crio-3a0e9235343a4a65926c2ff84a1f85768afddfcba3170ecb32195773715cbbf6 WatchSource:0}: Error finding container 3a0e9235343a4a65926c2ff84a1f85768afddfcba3170ecb32195773715cbbf6: Status 404 returned error can't find the container with id 3a0e9235343a4a65926c2ff84a1f85768afddfcba3170ecb32195773715cbbf6 Jan 21 21:15:38 crc kubenswrapper[4860]: I0121 21:15:38.922503 4860 generic.go:334] "Generic (PLEG): container finished" podID="b3ceb80d-2539-41b8-a472-6dc1a6bdee30" containerID="5fbd41a5bf17bf58b2bcca4ae93b5aaac7bdbd2867c192d26fb873b12b0db907" exitCode=0 Jan 21 21:15:38 crc kubenswrapper[4860]: I0121 21:15:38.923540 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bspqh" event={"ID":"b3ceb80d-2539-41b8-a472-6dc1a6bdee30","Type":"ContainerDied","Data":"5fbd41a5bf17bf58b2bcca4ae93b5aaac7bdbd2867c192d26fb873b12b0db907"} Jan 21 21:15:38 crc kubenswrapper[4860]: I0121 21:15:38.923585 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bspqh" event={"ID":"b3ceb80d-2539-41b8-a472-6dc1a6bdee30","Type":"ContainerStarted","Data":"3a0e9235343a4a65926c2ff84a1f85768afddfcba3170ecb32195773715cbbf6"} Jan 21 21:15:38 crc kubenswrapper[4860]: I0121 21:15:38.927831 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtc7j" event={"ID":"e639198b-f128-4643-823a-f52afd19d43b","Type":"ContainerStarted","Data":"98a1b7575683310f486c28b372b691cf6365f880235e90a4dfd98006d3320236"} Jan 21 21:15:38 crc kubenswrapper[4860]: I0121 21:15:38.931288 4860 generic.go:334] "Generic (PLEG): container finished" podID="859f5834-9350-48bf-9329-e20069b0613e" 
containerID="4843a2fe5341c7338e5bff874722ee6c129ddeda771af5276b8100418818a7c7" exitCode=0 Jan 21 21:15:38 crc kubenswrapper[4860]: I0121 21:15:38.931369 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwvkt" event={"ID":"859f5834-9350-48bf-9329-e20069b0613e","Type":"ContainerDied","Data":"4843a2fe5341c7338e5bff874722ee6c129ddeda771af5276b8100418818a7c7"} Jan 21 21:15:38 crc kubenswrapper[4860]: I0121 21:15:38.989846 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wtc7j" podStartSLOduration=2.3764507200000002 podStartE2EDuration="4.989815828s" podCreationTimestamp="2026-01-21 21:15:34 +0000 UTC" firstStartedPulling="2026-01-21 21:15:35.887800208 +0000 UTC m=+428.109978678" lastFinishedPulling="2026-01-21 21:15:38.501165316 +0000 UTC m=+430.723343786" observedRunningTime="2026-01-21 21:15:38.968538158 +0000 UTC m=+431.190716638" watchObservedRunningTime="2026-01-21 21:15:38.989815828 +0000 UTC m=+431.211994308" Jan 21 21:15:39 crc kubenswrapper[4860]: I0121 21:15:39.941219 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bspqh" event={"ID":"b3ceb80d-2539-41b8-a472-6dc1a6bdee30","Type":"ContainerStarted","Data":"650d750e64419984a64d2d02a7908e49df24822691130bd54d0449aff73360f5"} Jan 21 21:15:39 crc kubenswrapper[4860]: I0121 21:15:39.944685 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwvkt" event={"ID":"859f5834-9350-48bf-9329-e20069b0613e","Type":"ContainerStarted","Data":"1be14b1d800cf67c479516e4379970ad86e588c6a0666add222d0eedfd110eac"} Jan 21 21:15:39 crc kubenswrapper[4860]: I0121 21:15:39.985979 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mwvkt" podStartSLOduration=2.580504412 podStartE2EDuration="4.985955425s" podCreationTimestamp="2026-01-21 21:15:35 +0000 UTC" 
firstStartedPulling="2026-01-21 21:15:36.90531056 +0000 UTC m=+429.127489030" lastFinishedPulling="2026-01-21 21:15:39.310761573 +0000 UTC m=+431.532940043" observedRunningTime="2026-01-21 21:15:39.982417681 +0000 UTC m=+432.204596151" watchObservedRunningTime="2026-01-21 21:15:39.985955425 +0000 UTC m=+432.208133895" Jan 21 21:15:40 crc kubenswrapper[4860]: I0121 21:15:40.954060 4860 generic.go:334] "Generic (PLEG): container finished" podID="b3ceb80d-2539-41b8-a472-6dc1a6bdee30" containerID="650d750e64419984a64d2d02a7908e49df24822691130bd54d0449aff73360f5" exitCode=0 Jan 21 21:15:40 crc kubenswrapper[4860]: I0121 21:15:40.955074 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bspqh" event={"ID":"b3ceb80d-2539-41b8-a472-6dc1a6bdee30","Type":"ContainerDied","Data":"650d750e64419984a64d2d02a7908e49df24822691130bd54d0449aff73360f5"} Jan 21 21:15:43 crc kubenswrapper[4860]: I0121 21:15:43.697375 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g4nd6" Jan 21 21:15:43 crc kubenswrapper[4860]: I0121 21:15:43.698157 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g4nd6" Jan 21 21:15:43 crc kubenswrapper[4860]: I0121 21:15:43.749188 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g4nd6" Jan 21 21:15:43 crc kubenswrapper[4860]: I0121 21:15:43.974475 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bspqh" event={"ID":"b3ceb80d-2539-41b8-a472-6dc1a6bdee30","Type":"ContainerStarted","Data":"68fbb6b9d139ab47f583dfed197973338ff85a63164cc3c1153dd603c069b0b8"} Jan 21 21:15:43 crc kubenswrapper[4860]: I0121 21:15:43.997281 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bspqh" 
podStartSLOduration=4.206842942 podStartE2EDuration="6.99725981s" podCreationTimestamp="2026-01-21 21:15:37 +0000 UTC" firstStartedPulling="2026-01-21 21:15:38.925034174 +0000 UTC m=+431.147212634" lastFinishedPulling="2026-01-21 21:15:41.715451032 +0000 UTC m=+433.937629502" observedRunningTime="2026-01-21 21:15:43.995516204 +0000 UTC m=+436.217694684" watchObservedRunningTime="2026-01-21 21:15:43.99725981 +0000 UTC m=+436.219438280" Jan 21 21:15:44 crc kubenswrapper[4860]: I0121 21:15:44.024543 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g4nd6" Jan 21 21:15:44 crc kubenswrapper[4860]: I0121 21:15:44.570808 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-pgx87" Jan 21 21:15:44 crc kubenswrapper[4860]: I0121 21:15:44.633041 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nsjpv"] Jan 21 21:15:45 crc kubenswrapper[4860]: I0121 21:15:45.079056 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wtc7j" Jan 21 21:15:45 crc kubenswrapper[4860]: I0121 21:15:45.079510 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wtc7j" Jan 21 21:15:45 crc kubenswrapper[4860]: I0121 21:15:45.122766 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wtc7j" Jan 21 21:15:46 crc kubenswrapper[4860]: I0121 21:15:46.032282 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wtc7j" Jan 21 21:15:46 crc kubenswrapper[4860]: I0121 21:15:46.094844 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mwvkt" Jan 21 21:15:46 crc kubenswrapper[4860]: 
I0121 21:15:46.094922 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mwvkt" Jan 21 21:15:46 crc kubenswrapper[4860]: I0121 21:15:46.135819 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mwvkt" Jan 21 21:15:47 crc kubenswrapper[4860]: I0121 21:15:47.037718 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mwvkt" Jan 21 21:15:47 crc kubenswrapper[4860]: I0121 21:15:47.531661 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bspqh" Jan 21 21:15:47 crc kubenswrapper[4860]: I0121 21:15:47.531866 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bspqh" Jan 21 21:15:47 crc kubenswrapper[4860]: I0121 21:15:47.626400 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bspqh" Jan 21 21:15:57 crc kubenswrapper[4860]: I0121 21:15:57.589561 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bspqh" Jan 21 21:16:09 crc kubenswrapper[4860]: I0121 21:16:09.684337 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" podUID="3ce6d0d8-ad17-4129-801d-508640c3419a" containerName="registry" containerID="cri-o://856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0" gracePeriod=30 Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.264752 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.275058 4860 generic.go:334] "Generic (PLEG): container finished" podID="3ce6d0d8-ad17-4129-801d-508640c3419a" containerID="856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0" exitCode=0 Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.275156 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" event={"ID":"3ce6d0d8-ad17-4129-801d-508640c3419a","Type":"ContainerDied","Data":"856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0"} Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.275217 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" event={"ID":"3ce6d0d8-ad17-4129-801d-508640c3419a","Type":"ContainerDied","Data":"9b0847d461027b55b0a3f637033837506ac9a1a608bc61b462212425d7f7241a"} Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.275294 4860 scope.go:117] "RemoveContainer" containerID="856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.275680 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nsjpv" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.305213 4860 scope.go:117] "RemoveContainer" containerID="856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0" Jan 21 21:16:10 crc kubenswrapper[4860]: E0121 21:16:10.306188 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0\": container with ID starting with 856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0 not found: ID does not exist" containerID="856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.306241 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0"} err="failed to get container status \"856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0\": rpc error: code = NotFound desc = could not find container \"856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0\": container with ID starting with 856c583ce8268b930c7543332def87d8fda8d17bae5915d9646dd6470cff9ef0 not found: ID does not exist" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.451795 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-bound-sa-token\") pod \"3ce6d0d8-ad17-4129-801d-508640c3419a\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.451897 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-certificates\") pod \"3ce6d0d8-ad17-4129-801d-508640c3419a\" 
(UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.451994 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66z9l\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-kube-api-access-66z9l\") pod \"3ce6d0d8-ad17-4129-801d-508640c3419a\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.452127 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-tls\") pod \"3ce6d0d8-ad17-4129-801d-508640c3419a\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.452858 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"3ce6d0d8-ad17-4129-801d-508640c3419a\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.452962 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-trusted-ca\") pod \"3ce6d0d8-ad17-4129-801d-508640c3419a\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.453030 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ce6d0d8-ad17-4129-801d-508640c3419a-installation-pull-secrets\") pod \"3ce6d0d8-ad17-4129-801d-508640c3419a\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.453094 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ce6d0d8-ad17-4129-801d-508640c3419a-ca-trust-extracted\") pod \"3ce6d0d8-ad17-4129-801d-508640c3419a\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.454652 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3ce6d0d8-ad17-4129-801d-508640c3419a" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.454979 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3ce6d0d8-ad17-4129-801d-508640c3419a" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.464445 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ce6d0d8-ad17-4129-801d-508640c3419a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3ce6d0d8-ad17-4129-801d-508640c3419a" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.464585 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-kube-api-access-66z9l" (OuterVolumeSpecName: "kube-api-access-66z9l") pod "3ce6d0d8-ad17-4129-801d-508640c3419a" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a"). InnerVolumeSpecName "kube-api-access-66z9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:16:10 crc kubenswrapper[4860]: E0121 21:16:10.464993 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:3ce6d0d8-ad17-4129-801d-508640c3419a nodeName:}" failed. No retries permitted until 2026-01-21 21:16:10.964876252 +0000 UTC m=+463.187054722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "registry-storage" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "3ce6d0d8-ad17-4129-801d-508640c3419a" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.469638 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3ce6d0d8-ad17-4129-801d-508640c3419a" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.470752 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "3ce6d0d8-ad17-4129-801d-508640c3419a" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.486393 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ce6d0d8-ad17-4129-801d-508640c3419a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3ce6d0d8-ad17-4129-801d-508640c3419a" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.555488 4860 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ce6d0d8-ad17-4129-801d-508640c3419a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.555579 4860 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.555600 4860 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.555635 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66z9l\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-kube-api-access-66z9l\") on node \"crc\" DevicePath \"\"" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.555658 4860 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ce6d0d8-ad17-4129-801d-508640c3419a-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.555675 4860 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ce6d0d8-ad17-4129-801d-508640c3419a-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:16:10 crc kubenswrapper[4860]: I0121 21:16:10.555695 4860 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ce6d0d8-ad17-4129-801d-508640c3419a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 21:16:11 crc kubenswrapper[4860]: I0121 21:16:11.066156 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"3ce6d0d8-ad17-4129-801d-508640c3419a\" (UID: \"3ce6d0d8-ad17-4129-801d-508640c3419a\") " Jan 21 21:16:11 crc kubenswrapper[4860]: I0121 21:16:11.082229 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "3ce6d0d8-ad17-4129-801d-508640c3419a" (UID: "3ce6d0d8-ad17-4129-801d-508640c3419a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 21:16:11 crc kubenswrapper[4860]: I0121 21:16:11.230486 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nsjpv"] Jan 21 21:16:11 crc kubenswrapper[4860]: I0121 21:16:11.238132 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nsjpv"] Jan 21 21:16:12 crc kubenswrapper[4860]: I0121 21:16:12.587704 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ce6d0d8-ad17-4129-801d-508640c3419a" path="/var/lib/kubelet/pods/3ce6d0d8-ad17-4129-801d-508640c3419a/volumes" Jan 21 21:17:32 crc kubenswrapper[4860]: I0121 21:17:32.103766 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:17:32 crc kubenswrapper[4860]: I0121 21:17:32.105131 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:18:02 crc kubenswrapper[4860]: I0121 21:18:02.103567 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:18:02 crc kubenswrapper[4860]: I0121 21:18:02.105074 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:18:32 crc kubenswrapper[4860]: I0121 21:18:32.103632 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:18:32 crc kubenswrapper[4860]: I0121 21:18:32.105172 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:18:32 crc kubenswrapper[4860]: I0121 21:18:32.105344 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx"
Jan 21 21:18:32 crc kubenswrapper[4860]: I0121 21:18:32.106369 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"981fe1e88982a08419f9f8e881fb2849f11febf5c3b56821d4dc8376c101a3c8"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 21:18:32 crc kubenswrapper[4860]: I0121 21:18:32.106563 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://981fe1e88982a08419f9f8e881fb2849f11febf5c3b56821d4dc8376c101a3c8" gracePeriod=600
Jan 21 21:18:32 crc kubenswrapper[4860]: I0121 21:18:32.272984 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="981fe1e88982a08419f9f8e881fb2849f11febf5c3b56821d4dc8376c101a3c8" exitCode=0
Jan 21 21:18:32 crc kubenswrapper[4860]: I0121 21:18:32.273045 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"981fe1e88982a08419f9f8e881fb2849f11febf5c3b56821d4dc8376c101a3c8"}
Jan 21 21:18:32 crc kubenswrapper[4860]: I0121 21:18:32.273303 4860 scope.go:117] "RemoveContainer" containerID="3b65df24bc6ea2dc841321cae48e22a15ad8f9a2859950e88c8846162091f287"
Jan 21 21:18:33 crc kubenswrapper[4860]: I0121 21:18:33.282144 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"96db8aeabde9598ee6245e662c986810c9f7612477589d8508dbf6ba2ca4f34f"}
Jan 21 21:20:32 crc kubenswrapper[4860]: I0121 21:20:32.103800 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:20:32 crc kubenswrapper[4860]: I0121 21:20:32.104714 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:21:02 crc kubenswrapper[4860]: I0121 21:21:02.104260 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:21:02 crc kubenswrapper[4860]: I0121 21:21:02.105324 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:21:29 crc kubenswrapper[4860]: I0121 21:21:29.328152 4860 scope.go:117] "RemoveContainer" containerID="96bfd2a4e4ec84233e34a25c9dfce0cc89a21f4c08880ca39b80fb05d8db082a"
Jan 21 21:21:30 crc kubenswrapper[4860]: I0121 21:21:30.942255 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"]
Jan 21 21:21:30 crc kubenswrapper[4860]: E0121 21:21:30.942967 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce6d0d8-ad17-4129-801d-508640c3419a" containerName="registry"
Jan 21 21:21:30 crc kubenswrapper[4860]: I0121 21:21:30.942993 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce6d0d8-ad17-4129-801d-508640c3419a" containerName="registry"
Jan 21 21:21:30 crc kubenswrapper[4860]: I0121 21:21:30.943188 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ce6d0d8-ad17-4129-801d-508640c3419a" containerName="registry"
Jan 21 21:21:30 crc kubenswrapper[4860]: I0121 21:21:30.944298 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:30 crc kubenswrapper[4860]: I0121 21:21:30.948657 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 21 21:21:30 crc kubenswrapper[4860]: I0121 21:21:30.953398 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"]
Jan 21 21:21:31 crc kubenswrapper[4860]: I0121 21:21:31.014695 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6ggw\" (UniqueName: \"kubernetes.io/projected/910ee5e4-1afe-4f34-a512-fc390f5ce35a-kube-api-access-g6ggw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:31 crc kubenswrapper[4860]: I0121 21:21:31.014801 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:31 crc kubenswrapper[4860]: I0121 21:21:31.014857 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:31 crc kubenswrapper[4860]: I0121 21:21:31.116548 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6ggw\" (UniqueName: \"kubernetes.io/projected/910ee5e4-1afe-4f34-a512-fc390f5ce35a-kube-api-access-g6ggw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:31 crc kubenswrapper[4860]: I0121 21:21:31.116675 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:31 crc kubenswrapper[4860]: I0121 21:21:31.116718 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:31 crc kubenswrapper[4860]: I0121 21:21:31.117649 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:31 crc kubenswrapper[4860]: I0121 21:21:31.117867 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:31 crc kubenswrapper[4860]: I0121 21:21:31.145306 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6ggw\" (UniqueName: \"kubernetes.io/projected/910ee5e4-1afe-4f34-a512-fc390f5ce35a-kube-api-access-g6ggw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:31 crc kubenswrapper[4860]: I0121 21:21:31.264761 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:31 crc kubenswrapper[4860]: I0121 21:21:31.544406 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"]
Jan 21 21:21:32 crc kubenswrapper[4860]: I0121 21:21:32.103909 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:21:32 crc kubenswrapper[4860]: I0121 21:21:32.104389 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:21:32 crc kubenswrapper[4860]: I0121 21:21:32.104512 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx"
Jan 21 21:21:32 crc kubenswrapper[4860]: I0121 21:21:32.105423 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96db8aeabde9598ee6245e662c986810c9f7612477589d8508dbf6ba2ca4f34f"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 21:21:32 crc kubenswrapper[4860]: I0121 21:21:32.105525 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://96db8aeabde9598ee6245e662c986810c9f7612477589d8508dbf6ba2ca4f34f" gracePeriod=600
Jan 21 21:21:32 crc kubenswrapper[4860]: I0121 21:21:32.168259 4860 generic.go:334] "Generic (PLEG): container finished" podID="910ee5e4-1afe-4f34-a512-fc390f5ce35a" containerID="d78014b71c3eb5c97423c5ad2f473ef39ebce284b51e2e979134e06f73c24dcc" exitCode=0
Jan 21 21:21:32 crc kubenswrapper[4860]: I0121 21:21:32.168320 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j" event={"ID":"910ee5e4-1afe-4f34-a512-fc390f5ce35a","Type":"ContainerDied","Data":"d78014b71c3eb5c97423c5ad2f473ef39ebce284b51e2e979134e06f73c24dcc"}
Jan 21 21:21:32 crc kubenswrapper[4860]: I0121 21:21:32.168361 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j" event={"ID":"910ee5e4-1afe-4f34-a512-fc390f5ce35a","Type":"ContainerStarted","Data":"5fe8cfaa93a2b9d54d46f876df6e2dfe3a53cf5921e944ef995a24840bca1d0b"}
Jan 21 21:21:32 crc kubenswrapper[4860]: I0121 21:21:32.172318 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.179470 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="96db8aeabde9598ee6245e662c986810c9f7612477589d8508dbf6ba2ca4f34f" exitCode=0
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.179506 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"96db8aeabde9598ee6245e662c986810c9f7612477589d8508dbf6ba2ca4f34f"}
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.180445 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"6450f5e048fd300a5315e1af026d3a0f05cce9ec9913389ebdc890cf54d0c51e"}
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.180486 4860 scope.go:117] "RemoveContainer" containerID="981fe1e88982a08419f9f8e881fb2849f11febf5c3b56821d4dc8376c101a3c8"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.276372 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mkf7m"]
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.281263 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkf7m"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.292874 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkf7m"]
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.347625 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfcv\" (UniqueName: \"kubernetes.io/projected/b5e05733-b570-4b0a-ba85-e08fef5b2f86-kube-api-access-hbfcv\") pod \"redhat-operators-mkf7m\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " pod="openshift-marketplace/redhat-operators-mkf7m"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.347714 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-utilities\") pod \"redhat-operators-mkf7m\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " pod="openshift-marketplace/redhat-operators-mkf7m"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.347756 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-catalog-content\") pod \"redhat-operators-mkf7m\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " pod="openshift-marketplace/redhat-operators-mkf7m"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.449057 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfcv\" (UniqueName: \"kubernetes.io/projected/b5e05733-b570-4b0a-ba85-e08fef5b2f86-kube-api-access-hbfcv\") pod \"redhat-operators-mkf7m\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " pod="openshift-marketplace/redhat-operators-mkf7m"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.449513 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-utilities\") pod \"redhat-operators-mkf7m\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " pod="openshift-marketplace/redhat-operators-mkf7m"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.449646 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-catalog-content\") pod \"redhat-operators-mkf7m\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " pod="openshift-marketplace/redhat-operators-mkf7m"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.450202 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-utilities\") pod \"redhat-operators-mkf7m\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " pod="openshift-marketplace/redhat-operators-mkf7m"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.450305 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-catalog-content\") pod \"redhat-operators-mkf7m\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " pod="openshift-marketplace/redhat-operators-mkf7m"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.508212 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbfcv\" (UniqueName: \"kubernetes.io/projected/b5e05733-b570-4b0a-ba85-e08fef5b2f86-kube-api-access-hbfcv\") pod \"redhat-operators-mkf7m\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " pod="openshift-marketplace/redhat-operators-mkf7m"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.599147 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkf7m"
Jan 21 21:21:33 crc kubenswrapper[4860]: I0121 21:21:33.864846 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkf7m"]
Jan 21 21:21:33 crc kubenswrapper[4860]: W0121 21:21:33.900810 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5e05733_b570_4b0a_ba85_e08fef5b2f86.slice/crio-45cf7c6b8ab79b25ba7ceb58a9a3ed1c8877a67a50b90e93c12d723c122059b5 WatchSource:0}: Error finding container 45cf7c6b8ab79b25ba7ceb58a9a3ed1c8877a67a50b90e93c12d723c122059b5: Status 404 returned error can't find the container with id 45cf7c6b8ab79b25ba7ceb58a9a3ed1c8877a67a50b90e93c12d723c122059b5
Jan 21 21:21:34 crc kubenswrapper[4860]: I0121 21:21:34.194323 4860 generic.go:334] "Generic (PLEG): container finished" podID="910ee5e4-1afe-4f34-a512-fc390f5ce35a" containerID="fefac41a94d78ddc1ec3dcff23f4409908fc557846a0a4dd64416fca07451229" exitCode=0
Jan 21 21:21:34 crc kubenswrapper[4860]: I0121 21:21:34.194417 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j" event={"ID":"910ee5e4-1afe-4f34-a512-fc390f5ce35a","Type":"ContainerDied","Data":"fefac41a94d78ddc1ec3dcff23f4409908fc557846a0a4dd64416fca07451229"}
Jan 21 21:21:34 crc kubenswrapper[4860]: I0121 21:21:34.198726 4860 generic.go:334] "Generic (PLEG): container finished" podID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerID="72c8495df868402e6bc7b989219fb04d44f6363725551f63faf3e295be5865be" exitCode=0
Jan 21 21:21:34 crc kubenswrapper[4860]: I0121 21:21:34.198816 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkf7m" event={"ID":"b5e05733-b570-4b0a-ba85-e08fef5b2f86","Type":"ContainerDied","Data":"72c8495df868402e6bc7b989219fb04d44f6363725551f63faf3e295be5865be"}
Jan 21 21:21:34 crc kubenswrapper[4860]: I0121 21:21:34.198845 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkf7m" event={"ID":"b5e05733-b570-4b0a-ba85-e08fef5b2f86","Type":"ContainerStarted","Data":"45cf7c6b8ab79b25ba7ceb58a9a3ed1c8877a67a50b90e93c12d723c122059b5"}
Jan 21 21:21:34 crc kubenswrapper[4860]: I0121 21:21:34.758632 4860 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 21:21:35 crc kubenswrapper[4860]: I0121 21:21:35.207129 4860 generic.go:334] "Generic (PLEG): container finished" podID="910ee5e4-1afe-4f34-a512-fc390f5ce35a" containerID="7eaaddde3162bbb6448b874e3da757a68eff67b524d1ee9ae620cf0ed426ba0d" exitCode=0
Jan 21 21:21:35 crc kubenswrapper[4860]: I0121 21:21:35.207467 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j" event={"ID":"910ee5e4-1afe-4f34-a512-fc390f5ce35a","Type":"ContainerDied","Data":"7eaaddde3162bbb6448b874e3da757a68eff67b524d1ee9ae620cf0ed426ba0d"}
Jan 21 21:21:35 crc kubenswrapper[4860]: I0121 21:21:35.209204 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkf7m" event={"ID":"b5e05733-b570-4b0a-ba85-e08fef5b2f86","Type":"ContainerStarted","Data":"2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6"}
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.216829 4860 generic.go:334] "Generic (PLEG): container finished" podID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerID="2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6" exitCode=0
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.216884 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkf7m" event={"ID":"b5e05733-b570-4b0a-ba85-e08fef5b2f86","Type":"ContainerDied","Data":"2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6"}
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.443260 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.611178 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-bundle\") pod \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") "
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.611670 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6ggw\" (UniqueName: \"kubernetes.io/projected/910ee5e4-1afe-4f34-a512-fc390f5ce35a-kube-api-access-g6ggw\") pod \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") "
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.611771 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-util\") pod \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\" (UID: \"910ee5e4-1afe-4f34-a512-fc390f5ce35a\") "
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.615997 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-bundle" (OuterVolumeSpecName: "bundle") pod "910ee5e4-1afe-4f34-a512-fc390f5ce35a" (UID: "910ee5e4-1afe-4f34-a512-fc390f5ce35a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.619178 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/910ee5e4-1afe-4f34-a512-fc390f5ce35a-kube-api-access-g6ggw" (OuterVolumeSpecName: "kube-api-access-g6ggw") pod "910ee5e4-1afe-4f34-a512-fc390f5ce35a" (UID: "910ee5e4-1afe-4f34-a512-fc390f5ce35a"). InnerVolumeSpecName "kube-api-access-g6ggw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.629266 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-util" (OuterVolumeSpecName: "util") pod "910ee5e4-1afe-4f34-a512-fc390f5ce35a" (UID: "910ee5e4-1afe-4f34-a512-fc390f5ce35a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.713402 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-util\") on node \"crc\" DevicePath \"\""
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.713442 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/910ee5e4-1afe-4f34-a512-fc390f5ce35a-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:21:36 crc kubenswrapper[4860]: I0121 21:21:36.713452 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6ggw\" (UniqueName: \"kubernetes.io/projected/910ee5e4-1afe-4f34-a512-fc390f5ce35a-kube-api-access-g6ggw\") on node \"crc\" DevicePath \"\""
Jan 21 21:21:37 crc kubenswrapper[4860]: I0121 21:21:37.226268 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j" event={"ID":"910ee5e4-1afe-4f34-a512-fc390f5ce35a","Type":"ContainerDied","Data":"5fe8cfaa93a2b9d54d46f876df6e2dfe3a53cf5921e944ef995a24840bca1d0b"}
Jan 21 21:21:37 crc kubenswrapper[4860]: I0121 21:21:37.226682 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe8cfaa93a2b9d54d46f876df6e2dfe3a53cf5921e944ef995a24840bca1d0b"
Jan 21 21:21:37 crc kubenswrapper[4860]: I0121 21:21:37.226346 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j"
Jan 21 21:21:37 crc kubenswrapper[4860]: I0121 21:21:37.228965 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkf7m" event={"ID":"b5e05733-b570-4b0a-ba85-e08fef5b2f86","Type":"ContainerStarted","Data":"95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af"}
Jan 21 21:21:37 crc kubenswrapper[4860]: I0121 21:21:37.253518 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mkf7m" podStartSLOduration=1.5705394209999999 podStartE2EDuration="4.253478875s" podCreationTimestamp="2026-01-21 21:21:33 +0000 UTC" firstStartedPulling="2026-01-21 21:21:34.200444274 +0000 UTC m=+786.422622744" lastFinishedPulling="2026-01-21 21:21:36.883383728 +0000 UTC m=+789.105562198" observedRunningTime="2026-01-21 21:21:37.252385649 +0000 UTC m=+789.474564119" watchObservedRunningTime="2026-01-21 21:21:37.253478875 +0000 UTC m=+789.475657345"
Jan 21 21:21:40 crc kubenswrapper[4860]: I0121 21:21:40.658496 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pzw2c"]
Jan 21 21:21:40 crc kubenswrapper[4860]: I0121 21:21:40.659740 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovn-controller" containerID="cri-o://878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec" gracePeriod=30
Jan 21 21:21:40 crc kubenswrapper[4860]: I0121 21:21:40.660121 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="northd" containerID="cri-o://c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724" gracePeriod=30
Jan 21 21:21:40 crc kubenswrapper[4860]: I0121 21:21:40.660358 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="sbdb" containerID="cri-o://355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105" gracePeriod=30
Jan 21 21:21:40 crc kubenswrapper[4860]: I0121 21:21:40.660379 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47" gracePeriod=30
Jan 21 21:21:40 crc kubenswrapper[4860]: I0121 21:21:40.660460 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="nbdb" containerID="cri-o://920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd" gracePeriod=30
Jan 21 21:21:40 crc kubenswrapper[4860]: I0121 21:21:40.660525 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="kube-rbac-proxy-node" containerID="cri-o://07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332" gracePeriod=30
Jan 21 21:21:40 crc kubenswrapper[4860]: I0121 21:21:40.660495 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovn-acl-logging" containerID="cri-o://6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415" gracePeriod=30
Jan 21 21:21:40 crc kubenswrapper[4860]: I0121 21:21:40.718918 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" containerID="cri-o://21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a" gracePeriod=30
Jan 21 21:21:42 crc kubenswrapper[4860]: E0121 21:21:42.162521 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a is running failed: container process not found" containerID="21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Jan 21 21:21:42 crc kubenswrapper[4860]: E0121 21:21:42.210401 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a is running failed: container process not found" containerID="21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Jan 21 21:21:42 crc kubenswrapper[4860]: E0121 21:21:42.211811 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a is running failed: container process not found" containerID="21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Jan 21 21:21:42 crc kubenswrapper[4860]: E0121 21:21:42.211867 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller"
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.346281 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/2.log"
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.350750 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/1.log"
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.350824 4860 generic.go:334] "Generic (PLEG): container finished" podID="e2a7ca69-9cb5-41b5-9213-72165a9fc8e1" containerID="ad574bcd76cd727107043ba86bf21ea24269fefc8deb5e1cf8a15a01fe36fc4c" exitCode=2
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.350953 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s67xh" event={"ID":"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1","Type":"ContainerDied","Data":"ad574bcd76cd727107043ba86bf21ea24269fefc8deb5e1cf8a15a01fe36fc4c"}
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.351035 4860 scope.go:117] "RemoveContainer" containerID="ca77d0da8cec0e17e9814276bcc29ad55e2e3c909e3995bb0a3d6a971376f7be"
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.351754 4860 scope.go:117] "RemoveContainer" containerID="ad574bcd76cd727107043ba86bf21ea24269fefc8deb5e1cf8a15a01fe36fc4c"
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.358748 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovnkube-controller/3.log"
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.361633 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovn-acl-logging/0.log"
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.362231 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovn-controller/0.log"
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366155 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a" exitCode=0
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366187 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105" exitCode=0
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366196 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd" exitCode=0
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366204 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724" exitCode=0
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366212 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47" exitCode=0
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366222 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332" exitCode=0
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366230 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415" exitCode=143
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366238 4860 generic.go:334] "Generic (PLEG): container finished" podID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerID="878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec" exitCode=143
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366246 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a"}
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366313 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105"}
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366329 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd"}
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366340 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724"}
Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366349 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47"} Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366358 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332"} Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366368 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415"} Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.366383 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec"} Jan 21 21:21:42 crc kubenswrapper[4860]: I0121 21:21:42.411020 4860 scope.go:117] "RemoveContainer" containerID="8cf5eaf67fc5118db8f937fc087b9619b3f88ba597c88f88eb2262bca40efcf7" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.012877 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovn-acl-logging/0.log" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.013806 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovn-controller/0.log" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.014249 4860 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033479 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033516 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-bin\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033543 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-systemd\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033572 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-config\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033592 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-kubelet\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033608 4860 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-openvswitch\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033623 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-slash\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033642 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-log-socket\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033679 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-ovn-kubernetes\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033709 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-netns\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033723 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-node-log\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" 
(UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033738 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-var-lib-openvswitch\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033762 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-etc-openvswitch\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033781 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-script-lib\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033803 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-systemd-units\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033822 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovn-node-metrics-cert\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033840 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-9tb7z\" (UniqueName: \"kubernetes.io/projected/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-kube-api-access-9tb7z\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033856 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-env-overrides\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033883 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-ovn\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.033897 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-netd\") pod \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\" (UID: \"7976b0a1-a5f6-4aa6-86db-173e6342ff7f\") " Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.034112 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.034148 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.034166 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035029 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-log-socket" (OuterVolumeSpecName: "log-socket") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035064 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035083 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035100 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-slash" (OuterVolumeSpecName: "host-slash") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035118 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035139 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-node-log" (OuterVolumeSpecName: "node-log") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035164 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035585 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035615 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035635 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035752 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035781 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035864 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.035912 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.050411 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-kube-api-access-9tb7z" (OuterVolumeSpecName: "kube-api-access-9tb7z") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "kube-api-access-9tb7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.050445 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.092510 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "7976b0a1-a5f6-4aa6-86db-173e6342ff7f" (UID: "7976b0a1-a5f6-4aa6-86db-173e6342ff7f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135449 4860 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135511 4860 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135522 4860 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135530 4860 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135540 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135549 4860 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135558 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tb7z\" (UniqueName: \"kubernetes.io/projected/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-kube-api-access-9tb7z\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 
21:21:43.135577 4860 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135586 4860 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135594 4860 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135602 4860 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135611 4860 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135619 4860 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135628 4860 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135639 4860 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135647 4860 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135665 4860 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135673 4860 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135682 4860 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.135691 4860 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7976b0a1-a5f6-4aa6-86db-173e6342ff7f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.247062 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nh8kb"] Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.248911 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovn-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.248934 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovn-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.248945 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.248967 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.248976 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="910ee5e4-1afe-4f34-a512-fc390f5ce35a" containerName="extract" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.248981 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="910ee5e4-1afe-4f34-a512-fc390f5ce35a" containerName="extract" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.248992 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="northd" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249000 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="northd" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249007 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249014 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249021 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249029 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249046 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="sbdb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249052 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="sbdb" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249059 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="kubecfg-setup" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249065 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="kubecfg-setup" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249074 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="nbdb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249080 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="nbdb" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249089 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="910ee5e4-1afe-4f34-a512-fc390f5ce35a" containerName="util" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249095 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="910ee5e4-1afe-4f34-a512-fc390f5ce35a" containerName="util" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249102 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovn-acl-logging" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249107 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovn-acl-logging" Jan 
21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249116 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="910ee5e4-1afe-4f34-a512-fc390f5ce35a" containerName="pull" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249122 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="910ee5e4-1afe-4f34-a512-fc390f5ce35a" containerName="pull" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249130 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="kube-rbac-proxy-node" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249135 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="kube-rbac-proxy-node" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249144 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249149 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249250 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249260 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovn-acl-logging" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249269 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249281 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" 
containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249287 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="kube-rbac-proxy-node" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249294 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="nbdb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249300 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="sbdb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249308 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249315 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="northd" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249321 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="910ee5e4-1afe-4f34-a512-fc390f5ce35a" containerName="extract" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249330 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovn-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249426 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249435 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249542 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" 
containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: E0121 21:21:43.249628 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249635 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.249750 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" containerName="ovnkube-controller" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.251336 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339057 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-node-log\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339108 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxvnk\" (UniqueName: \"kubernetes.io/projected/d8ed94c4-8122-4ea4-8c07-47beb5960274-kube-api-access-xxvnk\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339131 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-run-netns\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339150 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-kubelet\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339224 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-run-ovn\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339248 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-slash\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339269 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8ed94c4-8122-4ea4-8c07-47beb5960274-env-overrides\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339284 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-log-socket\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339306 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-var-lib-openvswitch\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339322 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-etc-openvswitch\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339338 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-systemd-units\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339355 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d8ed94c4-8122-4ea4-8c07-47beb5960274-ovnkube-script-lib\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339368 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-run-openvswitch\") pod \"ovnkube-node-nh8kb\" (UID: 
\"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339387 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-run-ovn-kubernetes\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339412 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-run-systemd\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339441 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339459 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-cni-bin\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339483 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-cni-netd\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339499 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8ed94c4-8122-4ea4-8c07-47beb5960274-ovnkube-config\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.339518 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8ed94c4-8122-4ea4-8c07-47beb5960274-ovn-node-metrics-cert\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.374800 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/2.log" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.374908 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s67xh" event={"ID":"e2a7ca69-9cb5-41b5-9213-72165a9fc8e1","Type":"ContainerStarted","Data":"a2c52106f649b96220cbb133d697d3f3d7895c88508b43aec212f6cdbe383221"} Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.379473 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovn-acl-logging/0.log" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.380036 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzw2c_7976b0a1-a5f6-4aa6-86db-173e6342ff7f/ovn-controller/0.log" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.380544 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" event={"ID":"7976b0a1-a5f6-4aa6-86db-173e6342ff7f","Type":"ContainerDied","Data":"3069b2106995569c530b9d4edeaba0910294dd0b467c5c90b178a6a8a7783873"} Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.380593 4860 scope.go:117] "RemoveContainer" containerID="21ba00d4e61f729776b647f4923cf7a7daeb92065eed86172f98a0344cc6b46a" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.380628 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pzw2c" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.401808 4860 scope.go:117] "RemoveContainer" containerID="355e4b9b4da9338c53567fcb62c45a9b017b6a5015104cc00d1c25568be74105" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.416096 4860 scope.go:117] "RemoveContainer" containerID="920a5bc399b3224626943453fcb825f35ab360754eaea19edb1eff45a3e62bbd" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.438134 4860 scope.go:117] "RemoveContainer" containerID="c7f06236d1f2be49f3acb5a6edcd6861bf2f11fcc2459a86834878ac1d82b724" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.442569 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-run-systemd\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.442625 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.442649 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-cni-bin\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.442670 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-cni-netd\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.442689 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8ed94c4-8122-4ea4-8c07-47beb5960274-ovnkube-config\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.442736 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-cni-bin\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.442761 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.442794 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-cni-netd\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.442806 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8ed94c4-8122-4ea4-8c07-47beb5960274-ovn-node-metrics-cert\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.442996 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-node-log\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443488 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8ed94c4-8122-4ea4-8c07-47beb5960274-ovnkube-config\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443504 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-node-log\") pod 
\"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443537 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxvnk\" (UniqueName: \"kubernetes.io/projected/d8ed94c4-8122-4ea4-8c07-47beb5960274-kube-api-access-xxvnk\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443569 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-run-netns\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443596 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-kubelet\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443644 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-run-ovn\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443667 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-slash\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443729 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8ed94c4-8122-4ea4-8c07-47beb5960274-env-overrides\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443746 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-log-socket\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443774 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-etc-openvswitch\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443794 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-var-lib-openvswitch\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443810 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-systemd-units\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc 
kubenswrapper[4860]: I0121 21:21:43.443831 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d8ed94c4-8122-4ea4-8c07-47beb5960274-ovnkube-script-lib\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443851 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-run-openvswitch\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443869 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-run-ovn-kubernetes\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.443994 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-run-ovn-kubernetes\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.444275 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-run-netns\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.444306 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-kubelet\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.444325 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-run-ovn\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.444346 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-host-slash\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.446239 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-log-socket\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.446533 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8ed94c4-8122-4ea4-8c07-47beb5960274-ovn-node-metrics-cert\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.446587 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-etc-openvswitch\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.446670 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-var-lib-openvswitch\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.446704 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-run-openvswitch\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.446726 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-systemd-units\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.447276 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8ed94c4-8122-4ea4-8c07-47beb5960274-env-overrides\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.448474 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d8ed94c4-8122-4ea4-8c07-47beb5960274-run-systemd\") pod \"ovnkube-node-nh8kb\" (UID: 
\"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.450502 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d8ed94c4-8122-4ea4-8c07-47beb5960274-ovnkube-script-lib\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.481640 4860 scope.go:117] "RemoveContainer" containerID="8e171becd93987f8719c3ae94e8707454dc9bd42fe9ff095f4ab5fc4044ceb47" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.501510 4860 scope.go:117] "RemoveContainer" containerID="07abe7bab091e9c8aaa45e7d7574ba1917b93eeea99cea6c96d76a87b8b26332" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.518413 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxvnk\" (UniqueName: \"kubernetes.io/projected/d8ed94c4-8122-4ea4-8c07-47beb5960274-kube-api-access-xxvnk\") pod \"ovnkube-node-nh8kb\" (UID: \"d8ed94c4-8122-4ea4-8c07-47beb5960274\") " pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.520113 4860 scope.go:117] "RemoveContainer" containerID="6278ee80c2f515945508573055f5f5e2bae2fbf20797432877279fa543905415" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.523601 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pzw2c"] Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.541430 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pzw2c"] Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.552804 4860 scope.go:117] "RemoveContainer" containerID="878b691dbb34e7e65d590bb127cef53a55a1bbc942bc4d8e6c57f9cab5c3a6ec" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.570696 4860 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.604443 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mkf7m" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.604538 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mkf7m" Jan 21 21:21:43 crc kubenswrapper[4860]: I0121 21:21:43.617956 4860 scope.go:117] "RemoveContainer" containerID="f3598377993697f6bfe63af19c81a0893cdaad405e7dd392aed0f3964af55b3f" Jan 21 21:21:44 crc kubenswrapper[4860]: I0121 21:21:44.391033 4860 generic.go:334] "Generic (PLEG): container finished" podID="d8ed94c4-8122-4ea4-8c07-47beb5960274" containerID="4f0382cf47b15efc47c6b80fe845df89bdefdc6d5cfe283486281968ea919961" exitCode=0 Jan 21 21:21:44 crc kubenswrapper[4860]: I0121 21:21:44.391141 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" event={"ID":"d8ed94c4-8122-4ea4-8c07-47beb5960274","Type":"ContainerDied","Data":"4f0382cf47b15efc47c6b80fe845df89bdefdc6d5cfe283486281968ea919961"} Jan 21 21:21:44 crc kubenswrapper[4860]: I0121 21:21:44.392545 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" event={"ID":"d8ed94c4-8122-4ea4-8c07-47beb5960274","Type":"ContainerStarted","Data":"95faaa21c94e41cf0673d07a55a5c35d1ae5a2c7c5adc4cfa9c594f78a9a1ef5"} Jan 21 21:21:44 crc kubenswrapper[4860]: I0121 21:21:44.586736 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7976b0a1-a5f6-4aa6-86db-173e6342ff7f" path="/var/lib/kubelet/pods/7976b0a1-a5f6-4aa6-86db-173e6342ff7f/volumes" Jan 21 21:21:44 crc kubenswrapper[4860]: I0121 21:21:44.728851 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mkf7m" 
podUID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerName="registry-server" probeResult="failure" output=< Jan 21 21:21:44 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 21 21:21:44 crc kubenswrapper[4860]: > Jan 21 21:21:45 crc kubenswrapper[4860]: I0121 21:21:45.414816 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" event={"ID":"d8ed94c4-8122-4ea4-8c07-47beb5960274","Type":"ContainerStarted","Data":"13ce3e06256d7212d0fc9055a290f1732789dce819b41dff8bf5d45019b2b8c6"} Jan 21 21:21:45 crc kubenswrapper[4860]: I0121 21:21:45.415356 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" event={"ID":"d8ed94c4-8122-4ea4-8c07-47beb5960274","Type":"ContainerStarted","Data":"6563743ee2491122709746b75c55ced51fadcf5fbdc65b87b6fcaba4dd72afd2"} Jan 21 21:21:45 crc kubenswrapper[4860]: I0121 21:21:45.415371 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" event={"ID":"d8ed94c4-8122-4ea4-8c07-47beb5960274","Type":"ContainerStarted","Data":"8ab04c7f0d95c67d88cf50023eccc392972606a28441e3c3ad887ff1d9cd3d82"} Jan 21 21:21:45 crc kubenswrapper[4860]: I0121 21:21:45.415382 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" event={"ID":"d8ed94c4-8122-4ea4-8c07-47beb5960274","Type":"ContainerStarted","Data":"fe14640d6a230adf1752f44fae31e2ca563e2977010bc0bda116a8dee9474bdf"} Jan 21 21:21:45 crc kubenswrapper[4860]: I0121 21:21:45.415392 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" event={"ID":"d8ed94c4-8122-4ea4-8c07-47beb5960274","Type":"ContainerStarted","Data":"8dcc94a0316d4fc7147e5b9cb8b17e65c871935a0fc19c6c3395e1481c79400f"} Jan 21 21:21:46 crc kubenswrapper[4860]: I0121 21:21:46.423707 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" event={"ID":"d8ed94c4-8122-4ea4-8c07-47beb5960274","Type":"ContainerStarted","Data":"b6828037eb899fd8e88aa7e5a5a2af377785150fe1a4e8e8c4bca2c75dd5660a"}
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.173719 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"]
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.174616 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.177616 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.178587 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-hcdbr"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.179340 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.280131 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r565r\" (UniqueName: \"kubernetes.io/projected/a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f-kube-api-access-r565r\") pod \"obo-prometheus-operator-68bc856cb9-q67c7\" (UID: \"a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.310170 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"]
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.311080 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.318006 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-wcpp7"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.320546 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.342017 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"]
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.342917 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.381832 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r565r\" (UniqueName: \"kubernetes.io/projected/a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f-kube-api-access-r565r\") pod \"obo-prometheus-operator-68bc856cb9-q67c7\" (UID: \"a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.408158 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r565r\" (UniqueName: \"kubernetes.io/projected/a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f-kube-api-access-r565r\") pod \"obo-prometheus-operator-68bc856cb9-q67c7\" (UID: \"a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.482375 4860 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-operators/observability-operator-59bdc8b94-t8zjn"]
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.483068 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1ce9223-1adf-48f8-a0bf-31ce28e5719f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6\" (UID: \"a1ce9223-1adf-48f8-a0bf-31ce28e5719f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.483813 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1ce9223-1adf-48f8-a0bf-31ce28e5719f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6\" (UID: \"a1ce9223-1adf-48f8-a0bf-31ce28e5719f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.484137 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2f8b6ee-0b46-4492-ae99-aea050eed563-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv\" (UID: \"b2f8b6ee-0b46-4492-ae99-aea050eed563\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.484211 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2f8b6ee-0b46-4492-ae99-aea050eed563-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv\" (UID: \"b2f8b6ee-0b46-4492-ae99-aea050eed563\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.486970 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.491599 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-dfgnx"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.491993 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.494877 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.545664 4860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f_0(55a3fcf4f777ecd55530190984c677eb605651b500ba795608f3cc0f923050b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.545771 4860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f_0(55a3fcf4f777ecd55530190984c677eb605651b500ba795608f3cc0f923050b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.545807 4860 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f_0(55a3fcf4f777ecd55530190984c677eb605651b500ba795608f3cc0f923050b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.545866 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators(a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators(a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f_0(55a3fcf4f777ecd55530190984c677eb605651b500ba795608f3cc0f923050b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7" podUID="a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.585634 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2f8b6ee-0b46-4492-ae99-aea050eed563-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv\" (UID: \"b2f8b6ee-0b46-4492-ae99-aea050eed563\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.585687 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1ce9223-1adf-48f8-a0bf-31ce28e5719f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6\" (UID: \"a1ce9223-1adf-48f8-a0bf-31ce28e5719f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.585708 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1ce9223-1adf-48f8-a0bf-31ce28e5719f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6\" (UID: \"a1ce9223-1adf-48f8-a0bf-31ce28e5719f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.585765 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2f8b6ee-0b46-4492-ae99-aea050eed563-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv\" (UID: \"b2f8b6ee-0b46-4492-ae99-aea050eed563\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"
Jan 21 21:21:47 crc
kubenswrapper[4860]: I0121 21:21:47.590227 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1ce9223-1adf-48f8-a0bf-31ce28e5719f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6\" (UID: \"a1ce9223-1adf-48f8-a0bf-31ce28e5719f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.591729 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2f8b6ee-0b46-4492-ae99-aea050eed563-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv\" (UID: \"b2f8b6ee-0b46-4492-ae99-aea050eed563\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.592219 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1ce9223-1adf-48f8-a0bf-31ce28e5719f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6\" (UID: \"a1ce9223-1adf-48f8-a0bf-31ce28e5719f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.593721 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2f8b6ee-0b46-4492-ae99-aea050eed563-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv\" (UID: \"b2f8b6ee-0b46-4492-ae99-aea050eed563\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.628039 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.658376 4860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators_b2f8b6ee-0b46-4492-ae99-aea050eed563_0(2f221b30e87435e52b7eae7a31e7b16aad038b1d1c781fb993be913cc28f6ba0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.658461 4860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators_b2f8b6ee-0b46-4492-ae99-aea050eed563_0(2f221b30e87435e52b7eae7a31e7b16aad038b1d1c781fb993be913cc28f6ba0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.658494 4860 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators_b2f8b6ee-0b46-4492-ae99-aea050eed563_0(2f221b30e87435e52b7eae7a31e7b16aad038b1d1c781fb993be913cc28f6ba0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.658557 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators(b2f8b6ee-0b46-4492-ae99-aea050eed563)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators(b2f8b6ee-0b46-4492-ae99-aea050eed563)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators_b2f8b6ee-0b46-4492-ae99-aea050eed563_0(2f221b30e87435e52b7eae7a31e7b16aad038b1d1c781fb993be913cc28f6ba0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv" podUID="b2f8b6ee-0b46-4492-ae99-aea050eed563"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.661424 4860 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.689183 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gftl\" (UniqueName: \"kubernetes.io/projected/db3166f1-3c99-4217-859b-24835c6f1f1e-kube-api-access-6gftl\") pod \"observability-operator-59bdc8b94-t8zjn\" (UID: \"db3166f1-3c99-4217-859b-24835c6f1f1e\") " pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.689361 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/db3166f1-3c99-4217-859b-24835c6f1f1e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t8zjn\" (UID: \"db3166f1-3c99-4217-859b-24835c6f1f1e\") " pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.691718 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-mv2g7"]
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.692874 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.697279 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-6xdsf"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.706827 4860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators_a1ce9223-1adf-48f8-a0bf-31ce28e5719f_0(527d931cb3f8f1c3c6171c9cda33b8a124608eacc0645e5576a055dc63e9a055): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.706912 4860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators_a1ce9223-1adf-48f8-a0bf-31ce28e5719f_0(527d931cb3f8f1c3c6171c9cda33b8a124608eacc0645e5576a055dc63e9a055): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.707022 4860 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators_a1ce9223-1adf-48f8-a0bf-31ce28e5719f_0(527d931cb3f8f1c3c6171c9cda33b8a124608eacc0645e5576a055dc63e9a055): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.707086 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators(a1ce9223-1adf-48f8-a0bf-31ce28e5719f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators(a1ce9223-1adf-48f8-a0bf-31ce28e5719f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators_a1ce9223-1adf-48f8-a0bf-31ce28e5719f_0(527d931cb3f8f1c3c6171c9cda33b8a124608eacc0645e5576a055dc63e9a055): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6" podUID="a1ce9223-1adf-48f8-a0bf-31ce28e5719f"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.790201 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gftl\" (UniqueName: \"kubernetes.io/projected/db3166f1-3c99-4217-859b-24835c6f1f1e-kube-api-access-6gftl\") pod \"observability-operator-59bdc8b94-t8zjn\" (UID: \"db3166f1-3c99-4217-859b-24835c6f1f1e\") " pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.790485 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/db3166f1-3c99-4217-859b-24835c6f1f1e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t8zjn\" (UID: \"db3166f1-3c99-4217-859b-24835c6f1f1e\") " pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.796021 4860
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/db3166f1-3c99-4217-859b-24835c6f1f1e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t8zjn\" (UID: \"db3166f1-3c99-4217-859b-24835c6f1f1e\") " pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.813102 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gftl\" (UniqueName: \"kubernetes.io/projected/db3166f1-3c99-4217-859b-24835c6f1f1e-kube-api-access-6gftl\") pod \"observability-operator-59bdc8b94-t8zjn\" (UID: \"db3166f1-3c99-4217-859b-24835c6f1f1e\") " pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.872734 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.891658 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwsmb\" (UniqueName: \"kubernetes.io/projected/c5c4c6e9-c3e2-4b43-94a2-1918304ff52a-kube-api-access-xwsmb\") pod \"perses-operator-5bf474d74f-mv2g7\" (UID: \"c5c4c6e9-c3e2-4b43-94a2-1918304ff52a\") " pod="openshift-operators/perses-operator-5bf474d74f-mv2g7"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.892325 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5c4c6e9-c3e2-4b43-94a2-1918304ff52a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-mv2g7\" (UID: \"c5c4c6e9-c3e2-4b43-94a2-1918304ff52a\") " pod="openshift-operators/perses-operator-5bf474d74f-mv2g7"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.893447 4860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t8zjn_openshift-operators_db3166f1-3c99-4217-859b-24835c6f1f1e_0(30e814939896bec11f21f22f8958c119ab983f990ad4637415bcc517e5e027d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.893546 4860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t8zjn_openshift-operators_db3166f1-3c99-4217-859b-24835c6f1f1e_0(30e814939896bec11f21f22f8958c119ab983f990ad4637415bcc517e5e027d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.893580 4860 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t8zjn_openshift-operators_db3166f1-3c99-4217-859b-24835c6f1f1e_0(30e814939896bec11f21f22f8958c119ab983f990ad4637415bcc517e5e027d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:47 crc kubenswrapper[4860]: E0121 21:21:47.893650 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-t8zjn_openshift-operators(db3166f1-3c99-4217-859b-24835c6f1f1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-t8zjn_openshift-operators(db3166f1-3c99-4217-859b-24835c6f1f1e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t8zjn_openshift-operators_db3166f1-3c99-4217-859b-24835c6f1f1e_0(30e814939896bec11f21f22f8958c119ab983f990ad4637415bcc517e5e027d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" podUID="db3166f1-3c99-4217-859b-24835c6f1f1e"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.995443 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5c4c6e9-c3e2-4b43-94a2-1918304ff52a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-mv2g7\" (UID: \"c5c4c6e9-c3e2-4b43-94a2-1918304ff52a\") " pod="openshift-operators/perses-operator-5bf474d74f-mv2g7"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.995651 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwsmb\" (UniqueName: \"kubernetes.io/projected/c5c4c6e9-c3e2-4b43-94a2-1918304ff52a-kube-api-access-xwsmb\") pod \"perses-operator-5bf474d74f-mv2g7\" (UID: \"c5c4c6e9-c3e2-4b43-94a2-1918304ff52a\") " pod="openshift-operators/perses-operator-5bf474d74f-mv2g7"
Jan 21 21:21:47 crc kubenswrapper[4860]: I0121 21:21:47.996547 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName:
\"kubernetes.io/configmap/c5c4c6e9-c3e2-4b43-94a2-1918304ff52a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-mv2g7\" (UID: \"c5c4c6e9-c3e2-4b43-94a2-1918304ff52a\") " pod="openshift-operators/perses-operator-5bf474d74f-mv2g7"
Jan 21 21:21:48 crc kubenswrapper[4860]: I0121 21:21:48.013457 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwsmb\" (UniqueName: \"kubernetes.io/projected/c5c4c6e9-c3e2-4b43-94a2-1918304ff52a-kube-api-access-xwsmb\") pod \"perses-operator-5bf474d74f-mv2g7\" (UID: \"c5c4c6e9-c3e2-4b43-94a2-1918304ff52a\") " pod="openshift-operators/perses-operator-5bf474d74f-mv2g7"
Jan 21 21:21:48 crc kubenswrapper[4860]: I0121 21:21:48.014894 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7"
Jan 21 21:21:48 crc kubenswrapper[4860]: E0121 21:21:48.049009 4860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-mv2g7_openshift-operators_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a_0(d14e1c922549d1b387916081999d06307b7accf0a50730745fdca2f439eb8622): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 21:21:48 crc kubenswrapper[4860]: E0121 21:21:48.049097 4860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-mv2g7_openshift-operators_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a_0(d14e1c922549d1b387916081999d06307b7accf0a50730745fdca2f439eb8622): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7"
Jan 21 21:21:48 crc kubenswrapper[4860]: E0121 21:21:48.049136 4860 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-mv2g7_openshift-operators_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a_0(d14e1c922549d1b387916081999d06307b7accf0a50730745fdca2f439eb8622): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7"
Jan 21 21:21:48 crc kubenswrapper[4860]: E0121 21:21:48.049186 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-mv2g7_openshift-operators(c5c4c6e9-c3e2-4b43-94a2-1918304ff52a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-mv2g7_openshift-operators(c5c4c6e9-c3e2-4b43-94a2-1918304ff52a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-mv2g7_openshift-operators_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a_0(d14e1c922549d1b387916081999d06307b7accf0a50730745fdca2f439eb8622): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" podUID="c5c4c6e9-c3e2-4b43-94a2-1918304ff52a"
Jan 21 21:21:50 crc kubenswrapper[4860]: I0121 21:21:50.452270 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" event={"ID":"d8ed94c4-8122-4ea4-8c07-47beb5960274","Type":"ContainerStarted","Data":"fe0ed32470412690f07344b6aac03809356cb3d2b89dbbaf82d1d9756b93a6eb"}
Jan 21 21:21:51 crc kubenswrapper[4860]: I0121 21:21:51.461498 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" event={"ID":"d8ed94c4-8122-4ea4-8c07-47beb5960274","Type":"ContainerStarted","Data":"8d2ac08b470486202f3b07f9892f85d03e4e1ab5c463c4c6323e7d3fd6320d7c"}
Jan 21 21:21:51 crc kubenswrapper[4860]: I0121 21:21:51.461996 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb"
Jan 21 21:21:51 crc kubenswrapper[4860]: I0121 21:21:51.509068 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb"
Jan 21 21:21:51 crc kubenswrapper[4860]: I0121 21:21:51.516069 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" podStartSLOduration=8.516030989 podStartE2EDuration="8.516030989s" podCreationTimestamp="2026-01-21 21:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:21:51.512958724 +0000 UTC m=+803.735137204" watchObservedRunningTime="2026-01-21 21:21:51.516030989 +0000 UTC m=+803.738209459"
Jan 21 21:21:52 crc kubenswrapper[4860]: I0121 21:21:52.468340 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb"
Jan 21 21:21:52 crc kubenswrapper[4860]: I0121 21:21:52.469388 4860 kubelet.go:2542]
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb"
Jan 21 21:21:52 crc kubenswrapper[4860]: I0121 21:21:52.504405 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb"
Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.169000 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"]
Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.169179 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.169689 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.172125 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"]
Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.172241 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"
Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.172840 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"
Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.210001 4860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators_a1ce9223-1adf-48f8-a0bf-31ce28e5719f_0(f963249449dcff39f9bf071766b6971233fc9eda52a037ff6af3fb1f0adceb4d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.210651 4860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators_a1ce9223-1adf-48f8-a0bf-31ce28e5719f_0(f963249449dcff39f9bf071766b6971233fc9eda52a037ff6af3fb1f0adceb4d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.210715 4860 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators_a1ce9223-1adf-48f8-a0bf-31ce28e5719f_0(f963249449dcff39f9bf071766b6971233fc9eda52a037ff6af3fb1f0adceb4d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"
Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.210788 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators(a1ce9223-1adf-48f8-a0bf-31ce28e5719f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators(a1ce9223-1adf-48f8-a0bf-31ce28e5719f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_openshift-operators_a1ce9223-1adf-48f8-a0bf-31ce28e5719f_0(f963249449dcff39f9bf071766b6971233fc9eda52a037ff6af3fb1f0adceb4d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6" podUID="a1ce9223-1adf-48f8-a0bf-31ce28e5719f"
Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.227347 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-t8zjn"]
Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.227516 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.228111 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn"
Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.239903 4860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f_0(ebaf71b55c10c2e6eb1484256ba7d1fa6212b802c31a508b40adc0478f4b7fb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.239997 4860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f_0(ebaf71b55c10c2e6eb1484256ba7d1fa6212b802c31a508b40adc0478f4b7fb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.240028 4860 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f_0(ebaf71b55c10c2e6eb1484256ba7d1fa6212b802c31a508b40adc0478f4b7fb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.240085 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators(a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators(a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q67c7_openshift-operators_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f_0(ebaf71b55c10c2e6eb1484256ba7d1fa6212b802c31a508b40adc0478f4b7fb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7" podUID="a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f" Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.272374 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"] Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.272537 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv" Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.273117 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv" Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.280000 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-mv2g7"] Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.280207 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.280808 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.309629 4860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t8zjn_openshift-operators_db3166f1-3c99-4217-859b-24835c6f1f1e_0(291324edc9834c49a723594932a77663339f27989defd140b4e08559ac85c88f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.309721 4860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t8zjn_openshift-operators_db3166f1-3c99-4217-859b-24835c6f1f1e_0(291324edc9834c49a723594932a77663339f27989defd140b4e08559ac85c88f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.309752 4860 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t8zjn_openshift-operators_db3166f1-3c99-4217-859b-24835c6f1f1e_0(291324edc9834c49a723594932a77663339f27989defd140b4e08559ac85c88f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.309802 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-t8zjn_openshift-operators(db3166f1-3c99-4217-859b-24835c6f1f1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-t8zjn_openshift-operators(db3166f1-3c99-4217-859b-24835c6f1f1e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t8zjn_openshift-operators_db3166f1-3c99-4217-859b-24835c6f1f1e_0(291324edc9834c49a723594932a77663339f27989defd140b4e08559ac85c88f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" podUID="db3166f1-3c99-4217-859b-24835c6f1f1e" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.330516 4860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators_b2f8b6ee-0b46-4492-ae99-aea050eed563_0(61d0a2877c31c9bc74685fd835c1ca6ca1400674e7eb80d226c58804eeb321c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.330607 4860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators_b2f8b6ee-0b46-4492-ae99-aea050eed563_0(61d0a2877c31c9bc74685fd835c1ca6ca1400674e7eb80d226c58804eeb321c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.330635 4860 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators_b2f8b6ee-0b46-4492-ae99-aea050eed563_0(61d0a2877c31c9bc74685fd835c1ca6ca1400674e7eb80d226c58804eeb321c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.330706 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators(b2f8b6ee-0b46-4492-ae99-aea050eed563)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators(b2f8b6ee-0b46-4492-ae99-aea050eed563)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_openshift-operators_b2f8b6ee-0b46-4492-ae99-aea050eed563_0(61d0a2877c31c9bc74685fd835c1ca6ca1400674e7eb80d226c58804eeb321c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv" podUID="b2f8b6ee-0b46-4492-ae99-aea050eed563" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.338080 4860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-mv2g7_openshift-operators_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a_0(038e75038a1f39aa231b72875533adda72cddf82d30eff880abc98820fa92451): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.338127 4860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-mv2g7_openshift-operators_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a_0(038e75038a1f39aa231b72875533adda72cddf82d30eff880abc98820fa92451): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.338144 4860 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-mv2g7_openshift-operators_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a_0(038e75038a1f39aa231b72875533adda72cddf82d30eff880abc98820fa92451): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" Jan 21 21:21:53 crc kubenswrapper[4860]: E0121 21:21:53.338187 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-mv2g7_openshift-operators(c5c4c6e9-c3e2-4b43-94a2-1918304ff52a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-mv2g7_openshift-operators(c5c4c6e9-c3e2-4b43-94a2-1918304ff52a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-mv2g7_openshift-operators_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a_0(038e75038a1f39aa231b72875533adda72cddf82d30eff880abc98820fa92451): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" podUID="c5c4c6e9-c3e2-4b43-94a2-1918304ff52a" Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.694797 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mkf7m" Jan 21 21:21:53 crc kubenswrapper[4860]: I0121 21:21:53.763431 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mkf7m" Jan 21 21:21:54 crc kubenswrapper[4860]: I0121 21:21:54.265668 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkf7m"] Jan 21 21:21:55 crc kubenswrapper[4860]: I0121 21:21:55.483008 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mkf7m" podUID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerName="registry-server" containerID="cri-o://95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af" gracePeriod=2 Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.374253 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mkf7m" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.491521 4860 generic.go:334] "Generic (PLEG): container finished" podID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerID="95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af" exitCode=0 Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.491592 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkf7m" event={"ID":"b5e05733-b570-4b0a-ba85-e08fef5b2f86","Type":"ContainerDied","Data":"95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af"} Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.491609 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkf7m" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.491635 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkf7m" event={"ID":"b5e05733-b570-4b0a-ba85-e08fef5b2f86","Type":"ContainerDied","Data":"45cf7c6b8ab79b25ba7ceb58a9a3ed1c8877a67a50b90e93c12d723c122059b5"} Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.491655 4860 scope.go:117] "RemoveContainer" containerID="95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.512012 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-catalog-content\") pod \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.512084 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbfcv\" (UniqueName: \"kubernetes.io/projected/b5e05733-b570-4b0a-ba85-e08fef5b2f86-kube-api-access-hbfcv\") pod 
\"b5e05733-b570-4b0a-ba85-e08fef5b2f86\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.512220 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-utilities\") pod \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\" (UID: \"b5e05733-b570-4b0a-ba85-e08fef5b2f86\") " Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.513136 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-utilities" (OuterVolumeSpecName: "utilities") pod "b5e05733-b570-4b0a-ba85-e08fef5b2f86" (UID: "b5e05733-b570-4b0a-ba85-e08fef5b2f86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.516018 4860 scope.go:117] "RemoveContainer" containerID="2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.531594 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e05733-b570-4b0a-ba85-e08fef5b2f86-kube-api-access-hbfcv" (OuterVolumeSpecName: "kube-api-access-hbfcv") pod "b5e05733-b570-4b0a-ba85-e08fef5b2f86" (UID: "b5e05733-b570-4b0a-ba85-e08fef5b2f86"). InnerVolumeSpecName "kube-api-access-hbfcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.547571 4860 scope.go:117] "RemoveContainer" containerID="72c8495df868402e6bc7b989219fb04d44f6363725551f63faf3e295be5865be" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.567597 4860 scope.go:117] "RemoveContainer" containerID="95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af" Jan 21 21:21:56 crc kubenswrapper[4860]: E0121 21:21:56.568556 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af\": container with ID starting with 95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af not found: ID does not exist" containerID="95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.568656 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af"} err="failed to get container status \"95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af\": rpc error: code = NotFound desc = could not find container \"95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af\": container with ID starting with 95e3c2d1606f5ecbd7412b17c494390365d019e9781665ad1ec10cae63b720af not found: ID does not exist" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.568691 4860 scope.go:117] "RemoveContainer" containerID="2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6" Jan 21 21:21:56 crc kubenswrapper[4860]: E0121 21:21:56.569120 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6\": container with ID starting with 
2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6 not found: ID does not exist" containerID="2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.569233 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6"} err="failed to get container status \"2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6\": rpc error: code = NotFound desc = could not find container \"2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6\": container with ID starting with 2576969fb4c493f4a9fcc1de01df51b73cd85ad8c652acd65a11fe01638741a6 not found: ID does not exist" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.569317 4860 scope.go:117] "RemoveContainer" containerID="72c8495df868402e6bc7b989219fb04d44f6363725551f63faf3e295be5865be" Jan 21 21:21:56 crc kubenswrapper[4860]: E0121 21:21:56.569617 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72c8495df868402e6bc7b989219fb04d44f6363725551f63faf3e295be5865be\": container with ID starting with 72c8495df868402e6bc7b989219fb04d44f6363725551f63faf3e295be5865be not found: ID does not exist" containerID="72c8495df868402e6bc7b989219fb04d44f6363725551f63faf3e295be5865be" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.569648 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72c8495df868402e6bc7b989219fb04d44f6363725551f63faf3e295be5865be"} err="failed to get container status \"72c8495df868402e6bc7b989219fb04d44f6363725551f63faf3e295be5865be\": rpc error: code = NotFound desc = could not find container \"72c8495df868402e6bc7b989219fb04d44f6363725551f63faf3e295be5865be\": container with ID starting with 72c8495df868402e6bc7b989219fb04d44f6363725551f63faf3e295be5865be not found: ID does not 
exist" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.613835 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.614054 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbfcv\" (UniqueName: \"kubernetes.io/projected/b5e05733-b570-4b0a-ba85-e08fef5b2f86-kube-api-access-hbfcv\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.633087 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5e05733-b570-4b0a-ba85-e08fef5b2f86" (UID: "b5e05733-b570-4b0a-ba85-e08fef5b2f86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.715773 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e05733-b570-4b0a-ba85-e08fef5b2f86-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.824416 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkf7m"] Jan 21 21:21:56 crc kubenswrapper[4860]: I0121 21:21:56.831644 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mkf7m"] Jan 21 21:21:58 crc kubenswrapper[4860]: I0121 21:21:58.588756 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" path="/var/lib/kubelet/pods/b5e05733-b570-4b0a-ba85-e08fef5b2f86/volumes" Jan 21 21:22:05 crc kubenswrapper[4860]: I0121 21:22:05.585152 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" Jan 21 21:22:05 crc kubenswrapper[4860]: I0121 21:22:05.587268 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" Jan 21 21:22:05 crc kubenswrapper[4860]: I0121 21:22:05.962403 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-mv2g7"] Jan 21 21:22:05 crc kubenswrapper[4860]: W0121 21:22:05.969924 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5c4c6e9_c3e2_4b43_94a2_1918304ff52a.slice/crio-50ceb873dbc5e0e509dbb01cd24954c97f395c6eac6b22683fb247f114bd5f35 WatchSource:0}: Error finding container 50ceb873dbc5e0e509dbb01cd24954c97f395c6eac6b22683fb247f114bd5f35: Status 404 returned error can't find the container with id 50ceb873dbc5e0e509dbb01cd24954c97f395c6eac6b22683fb247f114bd5f35 Jan 21 21:22:06 crc kubenswrapper[4860]: I0121 21:22:06.568780 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" event={"ID":"c5c4c6e9-c3e2-4b43-94a2-1918304ff52a","Type":"ContainerStarted","Data":"50ceb873dbc5e0e509dbb01cd24954c97f395c6eac6b22683fb247f114bd5f35"} Jan 21 21:22:06 crc kubenswrapper[4860]: I0121 21:22:06.581520 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6" Jan 21 21:22:06 crc kubenswrapper[4860]: I0121 21:22:06.581998 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6" Jan 21 21:22:06 crc kubenswrapper[4860]: I0121 21:22:06.952349 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6"] Jan 21 21:22:07 crc kubenswrapper[4860]: I0121 21:22:07.578550 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7" Jan 21 21:22:07 crc kubenswrapper[4860]: I0121 21:22:07.579129 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7" Jan 21 21:22:07 crc kubenswrapper[4860]: I0121 21:22:07.598641 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6" event={"ID":"a1ce9223-1adf-48f8-a0bf-31ce28e5719f","Type":"ContainerStarted","Data":"9ad6332f19a1445927cbc7c3739ac12b23fee231ffefde9ecdedc1446d800df2"} Jan 21 21:22:07 crc kubenswrapper[4860]: I0121 21:22:07.942231 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7"] Jan 21 21:22:07 crc kubenswrapper[4860]: W0121 21:22:07.980541 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8923e74_d8ad_4a90_ba9f_f26f7c92ef4f.slice/crio-a839cbf77490dadd8ed3e1755d1cd061f75264c1b54dd79cfd2fe4ddb519b601 WatchSource:0}: Error finding container a839cbf77490dadd8ed3e1755d1cd061f75264c1b54dd79cfd2fe4ddb519b601: Status 404 returned error can't find the container with id a839cbf77490dadd8ed3e1755d1cd061f75264c1b54dd79cfd2fe4ddb519b601 Jan 21 21:22:08 crc kubenswrapper[4860]: I0121 21:22:08.580215 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" Jan 21 21:22:08 crc kubenswrapper[4860]: I0121 21:22:08.580363 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv" Jan 21 21:22:08 crc kubenswrapper[4860]: I0121 21:22:08.584311 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" Jan 21 21:22:08 crc kubenswrapper[4860]: I0121 21:22:08.584854 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv" Jan 21 21:22:08 crc kubenswrapper[4860]: I0121 21:22:08.610422 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7" event={"ID":"a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f","Type":"ContainerStarted","Data":"a839cbf77490dadd8ed3e1755d1cd061f75264c1b54dd79cfd2fe4ddb519b601"} Jan 21 21:22:09 crc kubenswrapper[4860]: I0121 21:22:09.046388 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-t8zjn"] Jan 21 21:22:09 crc kubenswrapper[4860]: I0121 21:22:09.163259 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv"] Jan 21 21:22:09 crc kubenswrapper[4860]: W0121 21:22:09.172352 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2f8b6ee_0b46_4492_ae99_aea050eed563.slice/crio-202a889369a988dac983cde3f668d2fde85191153d596918e0adc98910e96d00 WatchSource:0}: Error finding container 202a889369a988dac983cde3f668d2fde85191153d596918e0adc98910e96d00: Status 404 returned error can't find the container with id 
202a889369a988dac983cde3f668d2fde85191153d596918e0adc98910e96d00 Jan 21 21:22:09 crc kubenswrapper[4860]: I0121 21:22:09.625911 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" event={"ID":"db3166f1-3c99-4217-859b-24835c6f1f1e","Type":"ContainerStarted","Data":"3a1c9dfeedd9935974a80dcfbc082b89fb357610deb9d4d7e3adc289e1a9e45f"} Jan 21 21:22:09 crc kubenswrapper[4860]: I0121 21:22:09.629391 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv" event={"ID":"b2f8b6ee-0b46-4492-ae99-aea050eed563","Type":"ContainerStarted","Data":"202a889369a988dac983cde3f668d2fde85191153d596918e0adc98910e96d00"} Jan 21 21:22:13 crc kubenswrapper[4860]: I0121 21:22:13.609467 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nh8kb" Jan 21 21:22:15 crc kubenswrapper[4860]: I0121 21:22:15.733155 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6" event={"ID":"a1ce9223-1adf-48f8-a0bf-31ce28e5719f","Type":"ContainerStarted","Data":"5e4f531571bc86328bf0a96603ed1842d1c91b30f7b7a2ae3185bb4cd52a7a11"} Jan 21 21:22:15 crc kubenswrapper[4860]: I0121 21:22:15.740477 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv" event={"ID":"b2f8b6ee-0b46-4492-ae99-aea050eed563","Type":"ContainerStarted","Data":"5cca21cc703b71958bb14b3ed386dc915a83d08dbdd876966ae360a4dc2aa888"} Jan 21 21:22:15 crc kubenswrapper[4860]: I0121 21:22:15.744973 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7" event={"ID":"a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f","Type":"ContainerStarted","Data":"74f3ab9dff0a88162e5ad485537e4a5aa5aed70a36def805e2b8996cad589e8f"} Jan 21 
21:22:15 crc kubenswrapper[4860]: I0121 21:22:15.747818 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" event={"ID":"c5c4c6e9-c3e2-4b43-94a2-1918304ff52a","Type":"ContainerStarted","Data":"6cb9262b1852611d20f4067aa4aa6cb5604e0f845a2032360f222293af840aaa"} Jan 21 21:22:15 crc kubenswrapper[4860]: I0121 21:22:15.748032 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" Jan 21 21:22:15 crc kubenswrapper[4860]: I0121 21:22:15.767105 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6" podStartSLOduration=20.809212824 podStartE2EDuration="28.767078505s" podCreationTimestamp="2026-01-21 21:21:47 +0000 UTC" firstStartedPulling="2026-01-21 21:22:06.959690075 +0000 UTC m=+819.181868555" lastFinishedPulling="2026-01-21 21:22:14.917555756 +0000 UTC m=+827.139734236" observedRunningTime="2026-01-21 21:22:15.763004997 +0000 UTC m=+827.985183477" watchObservedRunningTime="2026-01-21 21:22:15.767078505 +0000 UTC m=+827.989256975" Jan 21 21:22:15 crc kubenswrapper[4860]: I0121 21:22:15.791674 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" podStartSLOduration=19.845656774 podStartE2EDuration="28.791651929s" podCreationTimestamp="2026-01-21 21:21:47 +0000 UTC" firstStartedPulling="2026-01-21 21:22:05.973140541 +0000 UTC m=+818.195319011" lastFinishedPulling="2026-01-21 21:22:14.919135696 +0000 UTC m=+827.141314166" observedRunningTime="2026-01-21 21:22:15.784585667 +0000 UTC m=+828.006764137" watchObservedRunningTime="2026-01-21 21:22:15.791651929 +0000 UTC m=+828.013830399" Jan 21 21:22:15 crc kubenswrapper[4860]: I0121 21:22:15.814762 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q67c7" podStartSLOduration=21.883129497 podStartE2EDuration="28.814735427s" podCreationTimestamp="2026-01-21 21:21:47 +0000 UTC" firstStartedPulling="2026-01-21 21:22:07.985609776 +0000 UTC m=+820.207788246" lastFinishedPulling="2026-01-21 21:22:14.917215696 +0000 UTC m=+827.139394176" observedRunningTime="2026-01-21 21:22:15.804437363 +0000 UTC m=+828.026615843" watchObservedRunningTime="2026-01-21 21:22:15.814735427 +0000 UTC m=+828.036913897" Jan 21 21:22:15 crc kubenswrapper[4860]: I0121 21:22:15.826677 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv" podStartSLOduration=23.09450926 podStartE2EDuration="28.826654093s" podCreationTimestamp="2026-01-21 21:21:47 +0000 UTC" firstStartedPulling="2026-01-21 21:22:09.193807798 +0000 UTC m=+821.415986268" lastFinishedPulling="2026-01-21 21:22:14.925952631 +0000 UTC m=+827.148131101" observedRunningTime="2026-01-21 21:22:15.825789276 +0000 UTC m=+828.047967766" watchObservedRunningTime="2026-01-21 21:22:15.826654093 +0000 UTC m=+828.048832573" Jan 21 21:22:20 crc kubenswrapper[4860]: I0121 21:22:20.893680 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" event={"ID":"db3166f1-3c99-4217-859b-24835c6f1f1e","Type":"ContainerStarted","Data":"1aa33a3e67b1257165f93d0bd89f850587b4873ad71e77bc284325d5e03fa9e8"} Jan 21 21:22:20 crc kubenswrapper[4860]: I0121 21:22:20.894482 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" Jan 21 21:22:20 crc kubenswrapper[4860]: I0121 21:22:20.896044 4860 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-t8zjn container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.13:8081/healthz\": dial tcp 
10.217.0.13:8081: connect: connection refused" start-of-body= Jan 21 21:22:20 crc kubenswrapper[4860]: I0121 21:22:20.896097 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" podUID="db3166f1-3c99-4217-859b-24835c6f1f1e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.13:8081/healthz\": dial tcp 10.217.0.13:8081: connect: connection refused" Jan 21 21:22:20 crc kubenswrapper[4860]: I0121 21:22:20.927452 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" podStartSLOduration=23.222046011 podStartE2EDuration="33.927430312s" podCreationTimestamp="2026-01-21 21:21:47 +0000 UTC" firstStartedPulling="2026-01-21 21:22:09.05238829 +0000 UTC m=+821.274566760" lastFinishedPulling="2026-01-21 21:22:19.757772591 +0000 UTC m=+831.979951061" observedRunningTime="2026-01-21 21:22:20.92482711 +0000 UTC m=+833.147005590" watchObservedRunningTime="2026-01-21 21:22:20.927430312 +0000 UTC m=+833.149608802" Jan 21 21:22:21 crc kubenswrapper[4860]: I0121 21:22:21.930423 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-t8zjn" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.018808 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-mv2g7" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.728839 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs"] Jan 21 21:22:28 crc kubenswrapper[4860]: E0121 21:22:28.729459 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerName="registry-server" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.729487 4860 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerName="registry-server" Jan 21 21:22:28 crc kubenswrapper[4860]: E0121 21:22:28.729517 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerName="extract-content" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.729525 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerName="extract-content" Jan 21 21:22:28 crc kubenswrapper[4860]: E0121 21:22:28.729533 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerName="extract-utilities" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.729539 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerName="extract-utilities" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.729703 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e05733-b570-4b0a-ba85-e08fef5b2f86" containerName="registry-server" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.730817 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.733857 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.764128 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs"] Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.868856 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.868965 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b82vl\" (UniqueName: \"kubernetes.io/projected/1a5257fe-4ae3-44ec-b045-524b3b95c81c-kube-api-access-b82vl\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.869028 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:28 crc kubenswrapper[4860]: 
I0121 21:22:28.970582 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b82vl\" (UniqueName: \"kubernetes.io/projected/1a5257fe-4ae3-44ec-b045-524b3b95c81c-kube-api-access-b82vl\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.970653 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.970726 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.971364 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.971473 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:28 crc kubenswrapper[4860]: I0121 21:22:28.999617 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b82vl\" (UniqueName: \"kubernetes.io/projected/1a5257fe-4ae3-44ec-b045-524b3b95c81c-kube-api-access-b82vl\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:29 crc kubenswrapper[4860]: I0121 21:22:29.071820 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:29 crc kubenswrapper[4860]: I0121 21:22:29.550828 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs"] Jan 21 21:22:29 crc kubenswrapper[4860]: W0121 21:22:29.570002 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a5257fe_4ae3_44ec_b045_524b3b95c81c.slice/crio-eca88c7ea9c8e7c2cfbee319149faf546982f7122bd7ba94401333a6f14da3f2 WatchSource:0}: Error finding container eca88c7ea9c8e7c2cfbee319149faf546982f7122bd7ba94401333a6f14da3f2: Status 404 returned error can't find the container with id eca88c7ea9c8e7c2cfbee319149faf546982f7122bd7ba94401333a6f14da3f2 Jan 21 21:22:30 crc kubenswrapper[4860]: I0121 21:22:30.165996 4860 generic.go:334] "Generic (PLEG): container finished" podID="1a5257fe-4ae3-44ec-b045-524b3b95c81c" containerID="534635ccf2185baea82edaa32e52ac86abe4bcee2383666c6b67385e5786e573" exitCode=0 
Jan 21 21:22:30 crc kubenswrapper[4860]: I0121 21:22:30.166396 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" event={"ID":"1a5257fe-4ae3-44ec-b045-524b3b95c81c","Type":"ContainerDied","Data":"534635ccf2185baea82edaa32e52ac86abe4bcee2383666c6b67385e5786e573"} Jan 21 21:22:30 crc kubenswrapper[4860]: I0121 21:22:30.166455 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" event={"ID":"1a5257fe-4ae3-44ec-b045-524b3b95c81c","Type":"ContainerStarted","Data":"eca88c7ea9c8e7c2cfbee319149faf546982f7122bd7ba94401333a6f14da3f2"} Jan 21 21:22:32 crc kubenswrapper[4860]: I0121 21:22:32.184659 4860 generic.go:334] "Generic (PLEG): container finished" podID="1a5257fe-4ae3-44ec-b045-524b3b95c81c" containerID="d7fab7b41a0ca8a0b0e0769095b1b59eeb364be71897c70efebc2c3e3fc2ddb5" exitCode=0 Jan 21 21:22:32 crc kubenswrapper[4860]: I0121 21:22:32.184735 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" event={"ID":"1a5257fe-4ae3-44ec-b045-524b3b95c81c","Type":"ContainerDied","Data":"d7fab7b41a0ca8a0b0e0769095b1b59eeb364be71897c70efebc2c3e3fc2ddb5"} Jan 21 21:22:33 crc kubenswrapper[4860]: I0121 21:22:33.197500 4860 generic.go:334] "Generic (PLEG): container finished" podID="1a5257fe-4ae3-44ec-b045-524b3b95c81c" containerID="3b986a6641dd5a5ef25e839a29f3c1253ae2aceff0de7ef3555d2823fd9d3fb8" exitCode=0 Jan 21 21:22:33 crc kubenswrapper[4860]: I0121 21:22:33.197613 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" event={"ID":"1a5257fe-4ae3-44ec-b045-524b3b95c81c","Type":"ContainerDied","Data":"3b986a6641dd5a5ef25e839a29f3c1253ae2aceff0de7ef3555d2823fd9d3fb8"} Jan 21 21:22:34 crc kubenswrapper[4860]: I0121 
21:22:34.554637 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:34 crc kubenswrapper[4860]: I0121 21:22:34.633474 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b82vl\" (UniqueName: \"kubernetes.io/projected/1a5257fe-4ae3-44ec-b045-524b3b95c81c-kube-api-access-b82vl\") pod \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " Jan 21 21:22:34 crc kubenswrapper[4860]: I0121 21:22:34.633563 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-util\") pod \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " Jan 21 21:22:34 crc kubenswrapper[4860]: I0121 21:22:34.633616 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-bundle\") pod \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\" (UID: \"1a5257fe-4ae3-44ec-b045-524b3b95c81c\") " Jan 21 21:22:34 crc kubenswrapper[4860]: I0121 21:22:34.634720 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-bundle" (OuterVolumeSpecName: "bundle") pod "1a5257fe-4ae3-44ec-b045-524b3b95c81c" (UID: "1a5257fe-4ae3-44ec-b045-524b3b95c81c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:22:34 crc kubenswrapper[4860]: I0121 21:22:34.642240 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5257fe-4ae3-44ec-b045-524b3b95c81c-kube-api-access-b82vl" (OuterVolumeSpecName: "kube-api-access-b82vl") pod "1a5257fe-4ae3-44ec-b045-524b3b95c81c" (UID: "1a5257fe-4ae3-44ec-b045-524b3b95c81c"). InnerVolumeSpecName "kube-api-access-b82vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:22:34 crc kubenswrapper[4860]: I0121 21:22:34.659250 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-util" (OuterVolumeSpecName: "util") pod "1a5257fe-4ae3-44ec-b045-524b3b95c81c" (UID: "1a5257fe-4ae3-44ec-b045-524b3b95c81c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:22:34 crc kubenswrapper[4860]: I0121 21:22:34.735710 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b82vl\" (UniqueName: \"kubernetes.io/projected/1a5257fe-4ae3-44ec-b045-524b3b95c81c-kube-api-access-b82vl\") on node \"crc\" DevicePath \"\"" Jan 21 21:22:34 crc kubenswrapper[4860]: I0121 21:22:34.735816 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-util\") on node \"crc\" DevicePath \"\"" Jan 21 21:22:34 crc kubenswrapper[4860]: I0121 21:22:34.735849 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a5257fe-4ae3-44ec-b045-524b3b95c81c-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:22:35 crc kubenswrapper[4860]: I0121 21:22:35.213126 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" 
event={"ID":"1a5257fe-4ae3-44ec-b045-524b3b95c81c","Type":"ContainerDied","Data":"eca88c7ea9c8e7c2cfbee319149faf546982f7122bd7ba94401333a6f14da3f2"} Jan 21 21:22:35 crc kubenswrapper[4860]: I0121 21:22:35.213181 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca88c7ea9c8e7c2cfbee319149faf546982f7122bd7ba94401333a6f14da3f2" Jan 21 21:22:35 crc kubenswrapper[4860]: I0121 21:22:35.213221 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.544414 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tpllw"] Jan 21 21:22:40 crc kubenswrapper[4860]: E0121 21:22:40.545336 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5257fe-4ae3-44ec-b045-524b3b95c81c" containerName="pull" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.545354 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5257fe-4ae3-44ec-b045-524b3b95c81c" containerName="pull" Jan 21 21:22:40 crc kubenswrapper[4860]: E0121 21:22:40.545378 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5257fe-4ae3-44ec-b045-524b3b95c81c" containerName="util" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.545386 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5257fe-4ae3-44ec-b045-524b3b95c81c" containerName="util" Jan 21 21:22:40 crc kubenswrapper[4860]: E0121 21:22:40.545410 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5257fe-4ae3-44ec-b045-524b3b95c81c" containerName="extract" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.545420 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5257fe-4ae3-44ec-b045-524b3b95c81c" containerName="extract" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.545553 4860 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1a5257fe-4ae3-44ec-b045-524b3b95c81c" containerName="extract" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.546136 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tpllw" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.549622 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.549824 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.550528 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jqdh5" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.557111 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tpllw"] Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.616275 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgwqv\" (UniqueName: \"kubernetes.io/projected/5f9bf17c-9142-474a-8a94-7e8cc90702f0-kube-api-access-mgwqv\") pod \"nmstate-operator-646758c888-tpllw\" (UID: \"5f9bf17c-9142-474a-8a94-7e8cc90702f0\") " pod="openshift-nmstate/nmstate-operator-646758c888-tpllw" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.870202 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgwqv\" (UniqueName: \"kubernetes.io/projected/5f9bf17c-9142-474a-8a94-7e8cc90702f0-kube-api-access-mgwqv\") pod \"nmstate-operator-646758c888-tpllw\" (UID: \"5f9bf17c-9142-474a-8a94-7e8cc90702f0\") " pod="openshift-nmstate/nmstate-operator-646758c888-tpllw" Jan 21 21:22:40 crc kubenswrapper[4860]: I0121 21:22:40.898784 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mgwqv\" (UniqueName: \"kubernetes.io/projected/5f9bf17c-9142-474a-8a94-7e8cc90702f0-kube-api-access-mgwqv\") pod \"nmstate-operator-646758c888-tpllw\" (UID: \"5f9bf17c-9142-474a-8a94-7e8cc90702f0\") " pod="openshift-nmstate/nmstate-operator-646758c888-tpllw" Jan 21 21:22:41 crc kubenswrapper[4860]: I0121 21:22:41.165631 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tpllw" Jan 21 21:22:41 crc kubenswrapper[4860]: I0121 21:22:41.454359 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tpllw"] Jan 21 21:22:42 crc kubenswrapper[4860]: I0121 21:22:42.257803 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tpllw" event={"ID":"5f9bf17c-9142-474a-8a94-7e8cc90702f0","Type":"ContainerStarted","Data":"2421904ae6f7bd5b47f6fc2a901ec927b4cd333d36898e41a5c10f7a344d2b82"} Jan 21 21:22:47 crc kubenswrapper[4860]: I0121 21:22:47.316598 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tpllw" event={"ID":"5f9bf17c-9142-474a-8a94-7e8cc90702f0","Type":"ContainerStarted","Data":"33f0e4a8e17d65bc7792552b01063f3781a8a00fda83d5d3241831fc978360c5"} Jan 21 21:22:47 crc kubenswrapper[4860]: I0121 21:22:47.343126 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-tpllw" podStartSLOduration=2.253879546 podStartE2EDuration="7.343063042s" podCreationTimestamp="2026-01-21 21:22:40 +0000 UTC" firstStartedPulling="2026-01-21 21:22:41.470573719 +0000 UTC m=+853.692752189" lastFinishedPulling="2026-01-21 21:22:46.559757205 +0000 UTC m=+858.781935685" observedRunningTime="2026-01-21 21:22:47.337448179 +0000 UTC m=+859.559626649" watchObservedRunningTime="2026-01-21 21:22:47.343063042 +0000 UTC m=+859.565241522" Jan 21 21:22:50 crc 
kubenswrapper[4860]: I0121 21:22:50.041420 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ktn72"] Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.043209 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ktn72" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.045298 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-nz2bf" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.054647 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66"] Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.055507 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.058790 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.067112 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ktn72"] Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.071161 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66"] Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.082111 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbkhv\" (UniqueName: \"kubernetes.io/projected/cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d-kube-api-access-cbkhv\") pod \"nmstate-webhook-8474b5b9d8-wnc66\" (UID: \"cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.082179 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wnc66\" (UID: \"cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.082243 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz9fx\" (UniqueName: \"kubernetes.io/projected/8364952a-bcf3-49ae-b357-0521e9d6e04e-kube-api-access-sz9fx\") pod \"nmstate-metrics-54757c584b-ktn72\" (UID: \"8364952a-bcf3-49ae-b357-0521e9d6e04e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ktn72" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.123625 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-66jdw"] Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.124455 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.183874 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-nmstate-lock\") pod \"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.183985 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz9fx\" (UniqueName: \"kubernetes.io/projected/8364952a-bcf3-49ae-b357-0521e9d6e04e-kube-api-access-sz9fx\") pod \"nmstate-metrics-54757c584b-ktn72\" (UID: \"8364952a-bcf3-49ae-b357-0521e9d6e04e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ktn72" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.184033 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-ovs-socket\") pod \"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.184067 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbkhv\" (UniqueName: \"kubernetes.io/projected/cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d-kube-api-access-cbkhv\") pod \"nmstate-webhook-8474b5b9d8-wnc66\" (UID: \"cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.184096 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-dbus-socket\") pod 
\"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.184156 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjr7c\" (UniqueName: \"kubernetes.io/projected/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-kube-api-access-jjr7c\") pod \"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.184188 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wnc66\" (UID: \"cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" Jan 21 21:22:50 crc kubenswrapper[4860]: E0121 21:22:50.184365 4860 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 21 21:22:50 crc kubenswrapper[4860]: E0121 21:22:50.184469 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d-tls-key-pair podName:cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d nodeName:}" failed. No retries permitted until 2026-01-21 21:22:50.684431682 +0000 UTC m=+862.906610152 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-wnc66" (UID: "cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d") : secret "openshift-nmstate-webhook" not found Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.207039 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbkhv\" (UniqueName: \"kubernetes.io/projected/cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d-kube-api-access-cbkhv\") pod \"nmstate-webhook-8474b5b9d8-wnc66\" (UID: \"cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.207439 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz9fx\" (UniqueName: \"kubernetes.io/projected/8364952a-bcf3-49ae-b357-0521e9d6e04e-kube-api-access-sz9fx\") pod \"nmstate-metrics-54757c584b-ktn72\" (UID: \"8364952a-bcf3-49ae-b357-0521e9d6e04e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ktn72" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.241692 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8"] Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.249044 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.252403 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-htdxm" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.252791 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.253704 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.254246 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8"] Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.284971 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-ovs-socket\") pod \"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.285251 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-dbus-socket\") pod \"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.285400 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c5b0be-96f9-4141-a721-54ca98a89d93-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-82rm8\" (UID: \"b6c5b0be-96f9-4141-a721-54ca98a89d93\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:50 
crc kubenswrapper[4860]: I0121 21:22:50.285511 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjr7c\" (UniqueName: \"kubernetes.io/projected/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-kube-api-access-jjr7c\") pod \"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.285147 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-ovs-socket\") pod \"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.285704 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-dbus-socket\") pod \"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.285685 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b6c5b0be-96f9-4141-a721-54ca98a89d93-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-82rm8\" (UID: \"b6c5b0be-96f9-4141-a721-54ca98a89d93\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.285851 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-nmstate-lock\") pod \"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.285924 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-nmstate-lock\") pod \"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.285919 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f4mc\" (UniqueName: \"kubernetes.io/projected/b6c5b0be-96f9-4141-a721-54ca98a89d93-kube-api-access-5f4mc\") pod \"nmstate-console-plugin-7754f76f8b-82rm8\" (UID: \"b6c5b0be-96f9-4141-a721-54ca98a89d93\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.316769 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjr7c\" (UniqueName: \"kubernetes.io/projected/4ccac8fa-d2c8-4110-9bd4-78a6340612f9-kube-api-access-jjr7c\") pod \"nmstate-handler-66jdw\" (UID: \"4ccac8fa-d2c8-4110-9bd4-78a6340612f9\") " pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.374111 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ktn72" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.387319 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c5b0be-96f9-4141-a721-54ca98a89d93-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-82rm8\" (UID: \"b6c5b0be-96f9-4141-a721-54ca98a89d93\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.387389 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b6c5b0be-96f9-4141-a721-54ca98a89d93-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-82rm8\" (UID: \"b6c5b0be-96f9-4141-a721-54ca98a89d93\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.387430 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f4mc\" (UniqueName: \"kubernetes.io/projected/b6c5b0be-96f9-4141-a721-54ca98a89d93-kube-api-access-5f4mc\") pod \"nmstate-console-plugin-7754f76f8b-82rm8\" (UID: \"b6c5b0be-96f9-4141-a721-54ca98a89d93\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:50 crc kubenswrapper[4860]: E0121 21:22:50.387687 4860 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 21 21:22:50 crc kubenswrapper[4860]: E0121 21:22:50.387860 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6c5b0be-96f9-4141-a721-54ca98a89d93-plugin-serving-cert podName:b6c5b0be-96f9-4141-a721-54ca98a89d93 nodeName:}" failed. No retries permitted until 2026-01-21 21:22:50.887829998 +0000 UTC m=+863.110008468 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b6c5b0be-96f9-4141-a721-54ca98a89d93-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-82rm8" (UID: "b6c5b0be-96f9-4141-a721-54ca98a89d93") : secret "plugin-serving-cert" not found Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.388777 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b6c5b0be-96f9-4141-a721-54ca98a89d93-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-82rm8\" (UID: \"b6c5b0be-96f9-4141-a721-54ca98a89d93\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.415717 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f4mc\" (UniqueName: \"kubernetes.io/projected/b6c5b0be-96f9-4141-a721-54ca98a89d93-kube-api-access-5f4mc\") pod \"nmstate-console-plugin-7754f76f8b-82rm8\" (UID: \"b6c5b0be-96f9-4141-a721-54ca98a89d93\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.440349 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.459314 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b5dd98db7-zplft"] Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.461745 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.478141 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b5dd98db7-zplft"] Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.488421 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-trusted-ca-bundle\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.488529 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjc7b\" (UniqueName: \"kubernetes.io/projected/7882576f-1287-498d-9ed2-e06eef1a5212-kube-api-access-xjc7b\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.488575 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-oauth-serving-cert\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.488596 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-oauth-config\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.488637 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-service-ca\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.488737 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-serving-cert\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.488868 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-console-config\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.591051 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-trusted-ca-bundle\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.591131 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjc7b\" (UniqueName: \"kubernetes.io/projected/7882576f-1287-498d-9ed2-e06eef1a5212-kube-api-access-xjc7b\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: 
I0121 21:22:50.591245 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-oauth-serving-cert\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.591301 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-oauth-config\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.591340 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-service-ca\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.591414 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-serving-cert\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.591507 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-console-config\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.594952 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-trusted-ca-bundle\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.596588 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-oauth-serving-cert\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.600076 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-service-ca\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.600859 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-console-config\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.606944 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-serving-cert\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.607236 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-oauth-config\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.616739 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjc7b\" (UniqueName: \"kubernetes.io/projected/7882576f-1287-498d-9ed2-e06eef1a5212-kube-api-access-xjc7b\") pod \"console-6b5dd98db7-zplft\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") " pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.693077 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wnc66\" (UID: \"cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.697588 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wnc66\" (UID: \"cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.789024 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.896365 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c5b0be-96f9-4141-a721-54ca98a89d93-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-82rm8\" (UID: \"b6c5b0be-96f9-4141-a721-54ca98a89d93\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.900859 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c5b0be-96f9-4141-a721-54ca98a89d93-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-82rm8\" (UID: \"b6c5b0be-96f9-4141-a721-54ca98a89d93\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:50 crc kubenswrapper[4860]: I0121 21:22:50.981900 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" Jan 21 21:22:51 crc kubenswrapper[4860]: I0121 21:22:51.055028 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ktn72"] Jan 21 21:22:51 crc kubenswrapper[4860]: W0121 21:22:51.062217 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8364952a_bcf3_49ae_b357_0521e9d6e04e.slice/crio-a8bf6eea61840dc918eee85589b3753c478fb06d2318b1c52e0b0798181c944e WatchSource:0}: Error finding container a8bf6eea61840dc918eee85589b3753c478fb06d2318b1c52e0b0798181c944e: Status 404 returned error can't find the container with id a8bf6eea61840dc918eee85589b3753c478fb06d2318b1c52e0b0798181c944e Jan 21 21:22:51 crc kubenswrapper[4860]: I0121 21:22:51.167919 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" Jan 21 21:22:51 crc kubenswrapper[4860]: I0121 21:22:51.270588 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66"] Jan 21 21:22:51 crc kubenswrapper[4860]: I0121 21:22:51.372276 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ktn72" event={"ID":"8364952a-bcf3-49ae-b357-0521e9d6e04e","Type":"ContainerStarted","Data":"a8bf6eea61840dc918eee85589b3753c478fb06d2318b1c52e0b0798181c944e"} Jan 21 21:22:51 crc kubenswrapper[4860]: I0121 21:22:51.374979 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-66jdw" event={"ID":"4ccac8fa-d2c8-4110-9bd4-78a6340612f9","Type":"ContainerStarted","Data":"9a75599b9c7ade97aeb1659ad77a6c95c2fe72febf7eae7c534ce17456eb5d8d"} Jan 21 21:22:51 crc kubenswrapper[4860]: I0121 21:22:51.389502 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b5dd98db7-zplft"] Jan 21 21:22:51 crc kubenswrapper[4860]: I0121 21:22:51.533643 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8"] Jan 21 21:22:52 crc kubenswrapper[4860]: I0121 21:22:52.393091 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5dd98db7-zplft" event={"ID":"7882576f-1287-498d-9ed2-e06eef1a5212","Type":"ContainerStarted","Data":"26505744a70734aaa7e06e9beaae5268752e26ae9259cffa8ec5822412cff25b"} Jan 21 21:22:52 crc kubenswrapper[4860]: I0121 21:22:52.393209 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5dd98db7-zplft" event={"ID":"7882576f-1287-498d-9ed2-e06eef1a5212","Type":"ContainerStarted","Data":"f4c4841152463e2a91610cf33c56961adf0957f6dabd026cc97b79bf51e5d86e"} Jan 21 21:22:52 crc kubenswrapper[4860]: I0121 21:22:52.395579 4860 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" event={"ID":"b6c5b0be-96f9-4141-a721-54ca98a89d93","Type":"ContainerStarted","Data":"aa26495d097a4908fabb7e738d694153201204ac8aadeb9ecd397320a9285e43"} Jan 21 21:22:52 crc kubenswrapper[4860]: I0121 21:22:52.396976 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" event={"ID":"cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d","Type":"ContainerStarted","Data":"5a3fade0075439bacd9a181f26cbbd50936c1f979da28ea1972f60ca1ea7642b"} Jan 21 21:22:52 crc kubenswrapper[4860]: I0121 21:22:52.420002 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b5dd98db7-zplft" podStartSLOduration=2.419960878 podStartE2EDuration="2.419960878s" podCreationTimestamp="2026-01-21 21:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:22:52.414991745 +0000 UTC m=+864.637170225" watchObservedRunningTime="2026-01-21 21:22:52.419960878 +0000 UTC m=+864.642139348" Jan 21 21:22:54 crc kubenswrapper[4860]: I0121 21:22:54.411260 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" event={"ID":"cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d","Type":"ContainerStarted","Data":"33a11954987750bb8880ac8d4b93e7be5a7f6b9f2cd596099d511745a31654db"} Jan 21 21:22:54 crc kubenswrapper[4860]: I0121 21:22:54.411891 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" Jan 21 21:22:54 crc kubenswrapper[4860]: I0121 21:22:54.414177 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ktn72" event={"ID":"8364952a-bcf3-49ae-b357-0521e9d6e04e","Type":"ContainerStarted","Data":"60a717d352dbd8231fc8cc610a0e1386e398127bb9f4242347225c1b68ace6ed"} Jan 21 21:22:54 crc 
kubenswrapper[4860]: I0121 21:22:54.433828 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" podStartSLOduration=1.645561466 podStartE2EDuration="4.433807303s" podCreationTimestamp="2026-01-21 21:22:50 +0000 UTC" firstStartedPulling="2026-01-21 21:22:51.371093734 +0000 UTC m=+863.593272204" lastFinishedPulling="2026-01-21 21:22:54.159339571 +0000 UTC m=+866.381518041" observedRunningTime="2026-01-21 21:22:54.428151669 +0000 UTC m=+866.650330149" watchObservedRunningTime="2026-01-21 21:22:54.433807303 +0000 UTC m=+866.655985773" Jan 21 21:22:55 crc kubenswrapper[4860]: I0121 21:22:55.423586 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" event={"ID":"b6c5b0be-96f9-4141-a721-54ca98a89d93","Type":"ContainerStarted","Data":"7a230e3a9bd2acbf50ddc4166df1cb55f28bd21e8e846d14919e9f308b162d5d"} Jan 21 21:22:55 crc kubenswrapper[4860]: I0121 21:22:55.428117 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-66jdw" event={"ID":"4ccac8fa-d2c8-4110-9bd4-78a6340612f9","Type":"ContainerStarted","Data":"5f07791a6f686ca068118d15970ef1aeaf981881aeaa190ed183dce0b57c3754"} Jan 21 21:22:55 crc kubenswrapper[4860]: I0121 21:22:55.428350 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:22:55 crc kubenswrapper[4860]: I0121 21:22:55.441660 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-82rm8" podStartSLOduration=1.807454408 podStartE2EDuration="5.441633079s" podCreationTimestamp="2026-01-21 21:22:50 +0000 UTC" firstStartedPulling="2026-01-21 21:22:51.539337633 +0000 UTC m=+863.761516103" lastFinishedPulling="2026-01-21 21:22:55.173516304 +0000 UTC m=+867.395694774" observedRunningTime="2026-01-21 21:22:55.441265349 +0000 UTC m=+867.663443849" 
watchObservedRunningTime="2026-01-21 21:22:55.441633079 +0000 UTC m=+867.663811549" Jan 21 21:22:55 crc kubenswrapper[4860]: I0121 21:22:55.473394 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-66jdw" podStartSLOduration=1.808791551 podStartE2EDuration="5.4733667s" podCreationTimestamp="2026-01-21 21:22:50 +0000 UTC" firstStartedPulling="2026-01-21 21:22:50.527708301 +0000 UTC m=+862.749886771" lastFinishedPulling="2026-01-21 21:22:54.19228345 +0000 UTC m=+866.414461920" observedRunningTime="2026-01-21 21:22:55.469177681 +0000 UTC m=+867.691356181" watchObservedRunningTime="2026-01-21 21:22:55.4733667 +0000 UTC m=+867.695545180" Jan 21 21:22:57 crc kubenswrapper[4860]: I0121 21:22:57.444744 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ktn72" event={"ID":"8364952a-bcf3-49ae-b357-0521e9d6e04e","Type":"ContainerStarted","Data":"da05827a3a7708c224088683e8c1f13c8f553f6a75c440a98ce8c6766ca31488"} Jan 21 21:22:57 crc kubenswrapper[4860]: I0121 21:22:57.465827 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-ktn72" podStartSLOduration=1.693611941 podStartE2EDuration="7.465803374s" podCreationTimestamp="2026-01-21 21:22:50 +0000 UTC" firstStartedPulling="2026-01-21 21:22:51.065346856 +0000 UTC m=+863.287525326" lastFinishedPulling="2026-01-21 21:22:56.837538289 +0000 UTC m=+869.059716759" observedRunningTime="2026-01-21 21:22:57.462196173 +0000 UTC m=+869.684374723" watchObservedRunningTime="2026-01-21 21:22:57.465803374 +0000 UTC m=+869.687981844" Jan 21 21:23:00 crc kubenswrapper[4860]: I0121 21:23:00.471713 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-66jdw" Jan 21 21:23:00 crc kubenswrapper[4860]: I0121 21:23:00.790180 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:23:00 crc kubenswrapper[4860]: I0121 21:23:00.790538 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:23:00 crc kubenswrapper[4860]: I0121 21:23:00.795101 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:23:01 crc kubenswrapper[4860]: I0121 21:23:01.523340 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b5dd98db7-zplft" Jan 21 21:23:01 crc kubenswrapper[4860]: I0121 21:23:01.578195 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hbh47"] Jan 21 21:23:10 crc kubenswrapper[4860]: I0121 21:23:10.990806 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wnc66" Jan 21 21:23:26 crc kubenswrapper[4860]: I0121 21:23:26.616985 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-hbh47" podUID="235af04d-ef1a-4328-a0c4-aa6d5bc04b92" containerName="console" containerID="cri-o://927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e" gracePeriod=15 Jan 21 21:23:26 crc kubenswrapper[4860]: I0121 21:23:26.930426 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd"] Jan 21 21:23:26 crc kubenswrapper[4860]: I0121 21:23:26.932268 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:26 crc kubenswrapper[4860]: I0121 21:23:26.934835 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 21:23:26 crc kubenswrapper[4860]: I0121 21:23:26.944511 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd"] Jan 21 21:23:26 crc kubenswrapper[4860]: I0121 21:23:26.965266 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:26 crc kubenswrapper[4860]: I0121 21:23:26.965342 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6sp5\" (UniqueName: \"kubernetes.io/projected/5c161771-f442-4590-980e-3346fa015d48-kube-api-access-n6sp5\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:26 crc kubenswrapper[4860]: I0121 21:23:26.965398 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:27 crc kubenswrapper[4860]: 
I0121 21:23:27.066040 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.066109 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sp5\" (UniqueName: \"kubernetes.io/projected/5c161771-f442-4590-980e-3346fa015d48-kube-api-access-n6sp5\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.066138 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.066789 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.066863 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.086869 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sp5\" (UniqueName: \"kubernetes.io/projected/5c161771-f442-4590-980e-3346fa015d48-kube-api-access-n6sp5\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.256392 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.479568 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hbh47_235af04d-ef1a-4328-a0c4-aa6d5bc04b92/console/0.log" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.479659 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.498995 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd"] Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.677514 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-oauth-serving-cert\") pod \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.677589 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj5z2\" (UniqueName: \"kubernetes.io/projected/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-kube-api-access-mj5z2\") pod \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.677644 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-config\") pod \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.677709 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-serving-cert\") pod \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.677782 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-oauth-config\") 
pod \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.677818 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-trusted-ca-bundle\") pod \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.677854 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-service-ca\") pod \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\" (UID: \"235af04d-ef1a-4328-a0c4-aa6d5bc04b92\") " Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.679356 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-service-ca" (OuterVolumeSpecName: "service-ca") pod "235af04d-ef1a-4328-a0c4-aa6d5bc04b92" (UID: "235af04d-ef1a-4328-a0c4-aa6d5bc04b92"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.679578 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "235af04d-ef1a-4328-a0c4-aa6d5bc04b92" (UID: "235af04d-ef1a-4328-a0c4-aa6d5bc04b92"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.679635 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "235af04d-ef1a-4328-a0c4-aa6d5bc04b92" (UID: "235af04d-ef1a-4328-a0c4-aa6d5bc04b92"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.680024 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-config" (OuterVolumeSpecName: "console-config") pod "235af04d-ef1a-4328-a0c4-aa6d5bc04b92" (UID: "235af04d-ef1a-4328-a0c4-aa6d5bc04b92"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.684821 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "235af04d-ef1a-4328-a0c4-aa6d5bc04b92" (UID: "235af04d-ef1a-4328-a0c4-aa6d5bc04b92"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.685123 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "235af04d-ef1a-4328-a0c4-aa6d5bc04b92" (UID: "235af04d-ef1a-4328-a0c4-aa6d5bc04b92"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.685145 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-kube-api-access-mj5z2" (OuterVolumeSpecName: "kube-api-access-mj5z2") pod "235af04d-ef1a-4328-a0c4-aa6d5bc04b92" (UID: "235af04d-ef1a-4328-a0c4-aa6d5bc04b92"). InnerVolumeSpecName "kube-api-access-mj5z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.713760 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hbh47_235af04d-ef1a-4328-a0c4-aa6d5bc04b92/console/0.log" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.713834 4860 generic.go:334] "Generic (PLEG): container finished" podID="235af04d-ef1a-4328-a0c4-aa6d5bc04b92" containerID="927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e" exitCode=2 Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.713903 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hbh47" event={"ID":"235af04d-ef1a-4328-a0c4-aa6d5bc04b92","Type":"ContainerDied","Data":"927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e"} Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.713918 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hbh47" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.713954 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hbh47" event={"ID":"235af04d-ef1a-4328-a0c4-aa6d5bc04b92","Type":"ContainerDied","Data":"1c8a1d2c227df1380fea2314a63e605a4df9c91e7f905cd0069c17b406a74b90"} Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.713994 4860 scope.go:117] "RemoveContainer" containerID="927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.715869 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" event={"ID":"5c161771-f442-4590-980e-3346fa015d48","Type":"ContainerStarted","Data":"508314dad508fb40434ea7f5d1c222228ef4b3a339816e2eb82065100ab760c6"} Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.715902 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" event={"ID":"5c161771-f442-4590-980e-3346fa015d48","Type":"ContainerStarted","Data":"0cd72da83196838cf4d41da8db98c1ef32e2bcb24e4b79000f3fd81b8bcdcf97"} Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.734242 4860 scope.go:117] "RemoveContainer" containerID="927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e" Jan 21 21:23:27 crc kubenswrapper[4860]: E0121 21:23:27.734909 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e\": container with ID starting with 927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e not found: ID does not exist" containerID="927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.734970 
4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e"} err="failed to get container status \"927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e\": rpc error: code = NotFound desc = could not find container \"927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e\": container with ID starting with 927ed74e79c0e826c46351b2d6b803f45d6d12f8ef535a19d371928a282fbd5e not found: ID does not exist" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.774773 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hbh47"] Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.780582 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.780651 4860 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.780664 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj5z2\" (UniqueName: \"kubernetes.io/projected/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-kube-api-access-mj5z2\") on node \"crc\" DevicePath \"\"" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.780673 4860 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.780680 4860 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.780689 4860 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.780699 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/235af04d-ef1a-4328-a0c4-aa6d5bc04b92-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:23:27 crc kubenswrapper[4860]: I0121 21:23:27.781551 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-hbh47"] Jan 21 21:23:28 crc kubenswrapper[4860]: I0121 21:23:28.598803 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235af04d-ef1a-4328-a0c4-aa6d5bc04b92" path="/var/lib/kubelet/pods/235af04d-ef1a-4328-a0c4-aa6d5bc04b92/volumes" Jan 21 21:23:28 crc kubenswrapper[4860]: I0121 21:23:28.726982 4860 generic.go:334] "Generic (PLEG): container finished" podID="5c161771-f442-4590-980e-3346fa015d48" containerID="508314dad508fb40434ea7f5d1c222228ef4b3a339816e2eb82065100ab760c6" exitCode=0 Jan 21 21:23:28 crc kubenswrapper[4860]: I0121 21:23:28.727062 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" event={"ID":"5c161771-f442-4590-980e-3346fa015d48","Type":"ContainerDied","Data":"508314dad508fb40434ea7f5d1c222228ef4b3a339816e2eb82065100ab760c6"} Jan 21 21:23:30 crc kubenswrapper[4860]: I0121 21:23:30.746532 4860 generic.go:334] "Generic (PLEG): container finished" podID="5c161771-f442-4590-980e-3346fa015d48" containerID="fc9009eeb7bdc4b19a342a227a6686bda88878d0cacd9b8da58c86fc8ffd0b1a" exitCode=0 Jan 21 21:23:30 crc kubenswrapper[4860]: 
I0121 21:23:30.746656 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" event={"ID":"5c161771-f442-4590-980e-3346fa015d48","Type":"ContainerDied","Data":"fc9009eeb7bdc4b19a342a227a6686bda88878d0cacd9b8da58c86fc8ffd0b1a"} Jan 21 21:23:31 crc kubenswrapper[4860]: I0121 21:23:31.758255 4860 generic.go:334] "Generic (PLEG): container finished" podID="5c161771-f442-4590-980e-3346fa015d48" containerID="e4449614d23b9ccf5ca44438ef8dc2c16659a05fd35cd39fe2b3602d31eab8c8" exitCode=0 Jan 21 21:23:31 crc kubenswrapper[4860]: I0121 21:23:31.758335 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" event={"ID":"5c161771-f442-4590-980e-3346fa015d48","Type":"ContainerDied","Data":"e4449614d23b9ccf5ca44438ef8dc2c16659a05fd35cd39fe2b3602d31eab8c8"} Jan 21 21:23:32 crc kubenswrapper[4860]: I0121 21:23:32.103818 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:23:32 crc kubenswrapper[4860]: I0121 21:23:32.104469 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.053037 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.105218 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6sp5\" (UniqueName: \"kubernetes.io/projected/5c161771-f442-4590-980e-3346fa015d48-kube-api-access-n6sp5\") pod \"5c161771-f442-4590-980e-3346fa015d48\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.105369 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-util\") pod \"5c161771-f442-4590-980e-3346fa015d48\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.105449 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-bundle\") pod \"5c161771-f442-4590-980e-3346fa015d48\" (UID: \"5c161771-f442-4590-980e-3346fa015d48\") " Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.107373 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-bundle" (OuterVolumeSpecName: "bundle") pod "5c161771-f442-4590-980e-3346fa015d48" (UID: "5c161771-f442-4590-980e-3346fa015d48"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.113026 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c161771-f442-4590-980e-3346fa015d48-kube-api-access-n6sp5" (OuterVolumeSpecName: "kube-api-access-n6sp5") pod "5c161771-f442-4590-980e-3346fa015d48" (UID: "5c161771-f442-4590-980e-3346fa015d48"). InnerVolumeSpecName "kube-api-access-n6sp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.208410 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.208490 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6sp5\" (UniqueName: \"kubernetes.io/projected/5c161771-f442-4590-980e-3346fa015d48-kube-api-access-n6sp5\") on node \"crc\" DevicePath \"\"" Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.253533 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-util" (OuterVolumeSpecName: "util") pod "5c161771-f442-4590-980e-3346fa015d48" (UID: "5c161771-f442-4590-980e-3346fa015d48"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.308899 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c161771-f442-4590-980e-3346fa015d48-util\") on node \"crc\" DevicePath \"\"" Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.774828 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.774763 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd" event={"ID":"5c161771-f442-4590-980e-3346fa015d48","Type":"ContainerDied","Data":"0cd72da83196838cf4d41da8db98c1ef32e2bcb24e4b79000f3fd81b8bcdcf97"} Jan 21 21:23:33 crc kubenswrapper[4860]: I0121 21:23:33.777091 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cd72da83196838cf4d41da8db98c1ef32e2bcb24e4b79000f3fd81b8bcdcf97" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.714215 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88"] Jan 21 21:23:42 crc kubenswrapper[4860]: E0121 21:23:42.715317 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c161771-f442-4590-980e-3346fa015d48" containerName="extract" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.715346 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c161771-f442-4590-980e-3346fa015d48" containerName="extract" Jan 21 21:23:42 crc kubenswrapper[4860]: E0121 21:23:42.715365 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235af04d-ef1a-4328-a0c4-aa6d5bc04b92" containerName="console" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.715373 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="235af04d-ef1a-4328-a0c4-aa6d5bc04b92" containerName="console" Jan 21 21:23:42 crc kubenswrapper[4860]: E0121 21:23:42.715391 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c161771-f442-4590-980e-3346fa015d48" containerName="util" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.715399 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c161771-f442-4590-980e-3346fa015d48" 
containerName="util" Jan 21 21:23:42 crc kubenswrapper[4860]: E0121 21:23:42.715430 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c161771-f442-4590-980e-3346fa015d48" containerName="pull" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.715438 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c161771-f442-4590-980e-3346fa015d48" containerName="pull" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.715632 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c161771-f442-4590-980e-3346fa015d48" containerName="extract" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.715658 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="235af04d-ef1a-4328-a0c4-aa6d5bc04b92" containerName="console" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.716445 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.723562 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.726290 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.726546 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-5hstf" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.726642 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.727094 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.753766 4860 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88"] Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.913357 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8584c36-7092-4bd3-b92e-5a3e8c16ec63-apiservice-cert\") pod \"metallb-operator-controller-manager-5844d47cc5-cxs88\" (UID: \"c8584c36-7092-4bd3-b92e-5a3e8c16ec63\") " pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.913458 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8584c36-7092-4bd3-b92e-5a3e8c16ec63-webhook-cert\") pod \"metallb-operator-controller-manager-5844d47cc5-cxs88\" (UID: \"c8584c36-7092-4bd3-b92e-5a3e8c16ec63\") " pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:42 crc kubenswrapper[4860]: I0121 21:23:42.913561 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mbnq\" (UniqueName: \"kubernetes.io/projected/c8584c36-7092-4bd3-b92e-5a3e8c16ec63-kube-api-access-6mbnq\") pod \"metallb-operator-controller-manager-5844d47cc5-cxs88\" (UID: \"c8584c36-7092-4bd3-b92e-5a3e8c16ec63\") " pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.015000 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8584c36-7092-4bd3-b92e-5a3e8c16ec63-apiservice-cert\") pod \"metallb-operator-controller-manager-5844d47cc5-cxs88\" (UID: \"c8584c36-7092-4bd3-b92e-5a3e8c16ec63\") " pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 
21:23:43.015093 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8584c36-7092-4bd3-b92e-5a3e8c16ec63-webhook-cert\") pod \"metallb-operator-controller-manager-5844d47cc5-cxs88\" (UID: \"c8584c36-7092-4bd3-b92e-5a3e8c16ec63\") " pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.015172 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mbnq\" (UniqueName: \"kubernetes.io/projected/c8584c36-7092-4bd3-b92e-5a3e8c16ec63-kube-api-access-6mbnq\") pod \"metallb-operator-controller-manager-5844d47cc5-cxs88\" (UID: \"c8584c36-7092-4bd3-b92e-5a3e8c16ec63\") " pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.027476 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8584c36-7092-4bd3-b92e-5a3e8c16ec63-apiservice-cert\") pod \"metallb-operator-controller-manager-5844d47cc5-cxs88\" (UID: \"c8584c36-7092-4bd3-b92e-5a3e8c16ec63\") " pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.027779 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8584c36-7092-4bd3-b92e-5a3e8c16ec63-webhook-cert\") pod \"metallb-operator-controller-manager-5844d47cc5-cxs88\" (UID: \"c8584c36-7092-4bd3-b92e-5a3e8c16ec63\") " pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.039794 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mbnq\" (UniqueName: \"kubernetes.io/projected/c8584c36-7092-4bd3-b92e-5a3e8c16ec63-kube-api-access-6mbnq\") pod 
\"metallb-operator-controller-manager-5844d47cc5-cxs88\" (UID: \"c8584c36-7092-4bd3-b92e-5a3e8c16ec63\") " pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.043804 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.070891 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7"] Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.071895 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.078317 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.078594 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-kb9c7" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.078599 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.097289 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7"] Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.119182 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6d67ae0-be03-465f-bb51-ace581cc0bb8-webhook-cert\") pod \"metallb-operator-webhook-server-ccfb7bd9d-w49p7\" (UID: \"f6d67ae0-be03-465f-bb51-ace581cc0bb8\") " pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:43 crc kubenswrapper[4860]: 
I0121 21:23:43.119258 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpfws\" (UniqueName: \"kubernetes.io/projected/f6d67ae0-be03-465f-bb51-ace581cc0bb8-kube-api-access-mpfws\") pod \"metallb-operator-webhook-server-ccfb7bd9d-w49p7\" (UID: \"f6d67ae0-be03-465f-bb51-ace581cc0bb8\") " pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.119301 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6d67ae0-be03-465f-bb51-ace581cc0bb8-apiservice-cert\") pod \"metallb-operator-webhook-server-ccfb7bd9d-w49p7\" (UID: \"f6d67ae0-be03-465f-bb51-ace581cc0bb8\") " pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.333446 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6d67ae0-be03-465f-bb51-ace581cc0bb8-apiservice-cert\") pod \"metallb-operator-webhook-server-ccfb7bd9d-w49p7\" (UID: \"f6d67ae0-be03-465f-bb51-ace581cc0bb8\") " pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.333584 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6d67ae0-be03-465f-bb51-ace581cc0bb8-webhook-cert\") pod \"metallb-operator-webhook-server-ccfb7bd9d-w49p7\" (UID: \"f6d67ae0-be03-465f-bb51-ace581cc0bb8\") " pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.333615 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpfws\" (UniqueName: 
\"kubernetes.io/projected/f6d67ae0-be03-465f-bb51-ace581cc0bb8-kube-api-access-mpfws\") pod \"metallb-operator-webhook-server-ccfb7bd9d-w49p7\" (UID: \"f6d67ae0-be03-465f-bb51-ace581cc0bb8\") " pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.348882 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f6d67ae0-be03-465f-bb51-ace581cc0bb8-apiservice-cert\") pod \"metallb-operator-webhook-server-ccfb7bd9d-w49p7\" (UID: \"f6d67ae0-be03-465f-bb51-ace581cc0bb8\") " pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.348908 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6d67ae0-be03-465f-bb51-ace581cc0bb8-webhook-cert\") pod \"metallb-operator-webhook-server-ccfb7bd9d-w49p7\" (UID: \"f6d67ae0-be03-465f-bb51-ace581cc0bb8\") " pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.380228 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpfws\" (UniqueName: \"kubernetes.io/projected/f6d67ae0-be03-465f-bb51-ace581cc0bb8-kube-api-access-mpfws\") pod \"metallb-operator-webhook-server-ccfb7bd9d-w49p7\" (UID: \"f6d67ae0-be03-465f-bb51-ace581cc0bb8\") " pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:43 crc kubenswrapper[4860]: I0121 21:23:43.637475 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:44 crc kubenswrapper[4860]: I0121 21:23:44.155312 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88"] Jan 21 21:23:44 crc kubenswrapper[4860]: I0121 21:23:44.373141 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7"] Jan 21 21:23:45 crc kubenswrapper[4860]: I0121 21:23:45.179126 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" event={"ID":"f6d67ae0-be03-465f-bb51-ace581cc0bb8","Type":"ContainerStarted","Data":"f0e9debdb47bad9be5a8c3e59067a8ee12815760ebfc7b859adf47e06b7e56ee"} Jan 21 21:23:45 crc kubenswrapper[4860]: I0121 21:23:45.182846 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" event={"ID":"c8584c36-7092-4bd3-b92e-5a3e8c16ec63","Type":"ContainerStarted","Data":"346267be24951d89fe30092545c51d1ae1903e07a1838c571924ff9fef835ba8"} Jan 21 21:23:52 crc kubenswrapper[4860]: I0121 21:23:52.428419 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" event={"ID":"f6d67ae0-be03-465f-bb51-ace581cc0bb8","Type":"ContainerStarted","Data":"e32b5023039fe91b945b02f740ab89aa7ca488a3b2da52f9ac7ca6b15f8f46c9"} Jan 21 21:23:52 crc kubenswrapper[4860]: I0121 21:23:52.428894 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:23:52 crc kubenswrapper[4860]: I0121 21:23:52.436645 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" 
event={"ID":"c8584c36-7092-4bd3-b92e-5a3e8c16ec63","Type":"ContainerStarted","Data":"c42744a9a6e04ae20fad3cb9fe3538857ac5934663b5114886a85132d9e9d800"} Jan 21 21:23:52 crc kubenswrapper[4860]: I0121 21:23:52.437573 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:23:52 crc kubenswrapper[4860]: I0121 21:23:52.453542 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" podStartSLOduration=1.660505578 podStartE2EDuration="9.45349297s" podCreationTimestamp="2026-01-21 21:23:43 +0000 UTC" firstStartedPulling="2026-01-21 21:23:44.381964638 +0000 UTC m=+916.604143108" lastFinishedPulling="2026-01-21 21:23:52.17495203 +0000 UTC m=+924.397130500" observedRunningTime="2026-01-21 21:23:52.45126081 +0000 UTC m=+924.673439280" watchObservedRunningTime="2026-01-21 21:23:52.45349297 +0000 UTC m=+924.675671440" Jan 21 21:23:52 crc kubenswrapper[4860]: I0121 21:23:52.485278 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" podStartSLOduration=2.516065116 podStartE2EDuration="10.485243652s" podCreationTimestamp="2026-01-21 21:23:42 +0000 UTC" firstStartedPulling="2026-01-21 21:23:44.185058298 +0000 UTC m=+916.407236768" lastFinishedPulling="2026-01-21 21:23:52.154236844 +0000 UTC m=+924.376415304" observedRunningTime="2026-01-21 21:23:52.477719826 +0000 UTC m=+924.699898306" watchObservedRunningTime="2026-01-21 21:23:52.485243652 +0000 UTC m=+924.707422122" Jan 21 21:24:02 crc kubenswrapper[4860]: I0121 21:24:02.104213 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 21 21:24:02 crc kubenswrapper[4860]: I0121 21:24:02.105343 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:24:03 crc kubenswrapper[4860]: I0121 21:24:03.643498 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-ccfb7bd9d-w49p7" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.047462 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5844d47cc5-cxs88" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.842578 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-6m2js"] Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.846353 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.848287 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-9xl5s" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.849897 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.858365 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls"] Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.859478 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.861116 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.864103 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.877016 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls"] Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.919742 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-reloader\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.919792 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/970afa92-8bd5-4351-80dd-ca87ad067409-metrics-certs\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.919832 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-metrics\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.919909 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-frr-conf\") pod 
\"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.919996 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvznr\" (UniqueName: \"kubernetes.io/projected/e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e-kube-api-access-cvznr\") pod \"frr-k8s-webhook-server-7df86c4f6c-6vpls\" (UID: \"e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.920051 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6vpls\" (UID: \"e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.920076 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmb9\" (UniqueName: \"kubernetes.io/projected/970afa92-8bd5-4351-80dd-ca87ad067409-kube-api-access-jvmb9\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.920101 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/970afa92-8bd5-4351-80dd-ca87ad067409-frr-startup\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.920142 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-frr-sockets\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.947408 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5hvn2"] Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.948629 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5hvn2" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.954608 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.954905 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.955068 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 21:24:23 crc kubenswrapper[4860]: I0121 21:24:23.955298 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7xkvp" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.076173 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-reloader\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.076251 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/970afa92-8bd5-4351-80dd-ca87ad067409-metrics-certs\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.076287 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-metrics\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.076318 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-frr-conf\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.076344 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvznr\" (UniqueName: \"kubernetes.io/projected/e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e-kube-api-access-cvznr\") pod \"frr-k8s-webhook-server-7df86c4f6c-6vpls\" (UID: \"e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.076381 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6vpls\" (UID: \"e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.076412 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvmb9\" (UniqueName: \"kubernetes.io/projected/970afa92-8bd5-4351-80dd-ca87ad067409-kube-api-access-jvmb9\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.076433 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/970afa92-8bd5-4351-80dd-ca87ad067409-frr-startup\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.076483 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-frr-sockets\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.077321 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-frr-sockets\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.077611 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-metrics\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.077841 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-frr-conf\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.077992 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-xd2ml"] Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.078321 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/970afa92-8bd5-4351-80dd-ca87ad067409-reloader\") pod 
\"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: E0121 21:24:24.078751 4860 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 21 21:24:24 crc kubenswrapper[4860]: E0121 21:24:24.078981 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e-cert podName:e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e nodeName:}" failed. No retries permitted until 2026-01-21 21:24:24.578903939 +0000 UTC m=+956.801082409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e-cert") pod "frr-k8s-webhook-server-7df86c4f6c-6vpls" (UID: "e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e") : secret "frr-k8s-webhook-server-cert" not found Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.080526 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.081634 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/970afa92-8bd5-4351-80dd-ca87ad067409-frr-startup\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.094836 4860 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.104397 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/970afa92-8bd5-4351-80dd-ca87ad067409-metrics-certs\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.106549 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-xd2ml"] Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.130078 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvmb9\" (UniqueName: \"kubernetes.io/projected/970afa92-8bd5-4351-80dd-ca87ad067409-kube-api-access-jvmb9\") pod \"frr-k8s-6m2js\" (UID: \"970afa92-8bd5-4351-80dd-ca87ad067409\") " pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.130674 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvznr\" (UniqueName: \"kubernetes.io/projected/e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e-kube-api-access-cvznr\") pod \"frr-k8s-webhook-server-7df86c4f6c-6vpls\" (UID: \"e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.170522 4860 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.177804 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-memberlist\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.177911 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-metrics-certs\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.177954 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65134009-4244-4384-91b7-057584cd6586-metallb-excludel2\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.178125 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nl67\" (UniqueName: \"kubernetes.io/projected/65134009-4244-4384-91b7-057584cd6586-kube-api-access-2nl67\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.279720 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-memberlist\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc 
kubenswrapper[4860]: I0121 21:24:24.279802 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-metrics-certs\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.279826 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65134009-4244-4384-91b7-057584cd6586-metallb-excludel2\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.279852 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qh2m\" (UniqueName: \"kubernetes.io/projected/c9335377-613f-4d57-8ad1-48dc561aaa28-kube-api-access-2qh2m\") pod \"controller-6968d8fdc4-xd2ml\" (UID: \"c9335377-613f-4d57-8ad1-48dc561aaa28\") " pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.279875 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nl67\" (UniqueName: \"kubernetes.io/projected/65134009-4244-4384-91b7-057584cd6586-kube-api-access-2nl67\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.279910 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9335377-613f-4d57-8ad1-48dc561aaa28-metrics-certs\") pod \"controller-6968d8fdc4-xd2ml\" (UID: \"c9335377-613f-4d57-8ad1-48dc561aaa28\") " pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.279965 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9335377-613f-4d57-8ad1-48dc561aaa28-cert\") pod \"controller-6968d8fdc4-xd2ml\" (UID: \"c9335377-613f-4d57-8ad1-48dc561aaa28\") " pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:24 crc kubenswrapper[4860]: E0121 21:24:24.280270 4860 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 21 21:24:24 crc kubenswrapper[4860]: E0121 21:24:24.280275 4860 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 21:24:24 crc kubenswrapper[4860]: E0121 21:24:24.280433 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-metrics-certs podName:65134009-4244-4384-91b7-057584cd6586 nodeName:}" failed. No retries permitted until 2026-01-21 21:24:24.780387871 +0000 UTC m=+957.002566501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-metrics-certs") pod "speaker-5hvn2" (UID: "65134009-4244-4384-91b7-057584cd6586") : secret "speaker-certs-secret" not found Jan 21 21:24:24 crc kubenswrapper[4860]: E0121 21:24:24.280677 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-memberlist podName:65134009-4244-4384-91b7-057584cd6586 nodeName:}" failed. No retries permitted until 2026-01-21 21:24:24.78065279 +0000 UTC m=+957.002831260 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-memberlist") pod "speaker-5hvn2" (UID: "65134009-4244-4384-91b7-057584cd6586") : secret "metallb-memberlist" not found Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.280868 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65134009-4244-4384-91b7-057584cd6586-metallb-excludel2\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.304668 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nl67\" (UniqueName: \"kubernetes.io/projected/65134009-4244-4384-91b7-057584cd6586-kube-api-access-2nl67\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.381215 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9335377-613f-4d57-8ad1-48dc561aaa28-metrics-certs\") pod \"controller-6968d8fdc4-xd2ml\" (UID: \"c9335377-613f-4d57-8ad1-48dc561aaa28\") " pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.381314 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9335377-613f-4d57-8ad1-48dc561aaa28-cert\") pod \"controller-6968d8fdc4-xd2ml\" (UID: \"c9335377-613f-4d57-8ad1-48dc561aaa28\") " pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.381400 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qh2m\" (UniqueName: 
\"kubernetes.io/projected/c9335377-613f-4d57-8ad1-48dc561aaa28-kube-api-access-2qh2m\") pod \"controller-6968d8fdc4-xd2ml\" (UID: \"c9335377-613f-4d57-8ad1-48dc561aaa28\") " pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.385902 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9335377-613f-4d57-8ad1-48dc561aaa28-metrics-certs\") pod \"controller-6968d8fdc4-xd2ml\" (UID: \"c9335377-613f-4d57-8ad1-48dc561aaa28\") " pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.386566 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9335377-613f-4d57-8ad1-48dc561aaa28-cert\") pod \"controller-6968d8fdc4-xd2ml\" (UID: \"c9335377-613f-4d57-8ad1-48dc561aaa28\") " pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.406384 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qh2m\" (UniqueName: \"kubernetes.io/projected/c9335377-613f-4d57-8ad1-48dc561aaa28-kube-api-access-2qh2m\") pod \"controller-6968d8fdc4-xd2ml\" (UID: \"c9335377-613f-4d57-8ad1-48dc561aaa28\") " pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.479775 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.584108 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6vpls\" (UID: \"e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.591066 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6vpls\" (UID: \"e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.698068 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6m2js" event={"ID":"970afa92-8bd5-4351-80dd-ca87ad067409","Type":"ContainerStarted","Data":"f73b7cf021aa2bf39c03d4ab1ddde9772268dd62cb22ff63d1fe1b6a5c266c86"} Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.724833 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-xd2ml"] Jan 21 21:24:24 crc kubenswrapper[4860]: W0121 21:24:24.732215 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9335377_613f_4d57_8ad1_48dc561aaa28.slice/crio-5ce55d24aefbdf0522c4e8a695e0b7ec2a4a3ee93b46ade16baec4bae113342c WatchSource:0}: Error finding container 5ce55d24aefbdf0522c4e8a695e0b7ec2a4a3ee93b46ade16baec4bae113342c: Status 404 returned error can't find the container with id 5ce55d24aefbdf0522c4e8a695e0b7ec2a4a3ee93b46ade16baec4bae113342c Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.789582 4860 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-memberlist\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.789650 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-metrics-certs\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: E0121 21:24:24.789795 4860 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 21:24:24 crc kubenswrapper[4860]: E0121 21:24:24.789894 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-memberlist podName:65134009-4244-4384-91b7-057584cd6586 nodeName:}" failed. No retries permitted until 2026-01-21 21:24:25.789875224 +0000 UTC m=+958.012053694 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-memberlist") pod "speaker-5hvn2" (UID: "65134009-4244-4384-91b7-057584cd6586") : secret "metallb-memberlist" not found Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.793433 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-metrics-certs\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:24 crc kubenswrapper[4860]: I0121 21:24:24.798577 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" Jan 21 21:24:25 crc kubenswrapper[4860]: I0121 21:24:25.206247 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls"] Jan 21 21:24:25 crc kubenswrapper[4860]: I0121 21:24:25.709774 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" event={"ID":"e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e","Type":"ContainerStarted","Data":"13f8e3f45aef9d2e28f53a57501216727968a9ebd8c1b1e263c39cedd77ca357"} Jan 21 21:24:25 crc kubenswrapper[4860]: I0121 21:24:25.713443 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xd2ml" event={"ID":"c9335377-613f-4d57-8ad1-48dc561aaa28","Type":"ContainerStarted","Data":"e9ee6d4065c0b8b3008f026564cf9834376e84c3d83036540e1ca569b420a60b"} Jan 21 21:24:25 crc kubenswrapper[4860]: I0121 21:24:25.713513 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xd2ml" event={"ID":"c9335377-613f-4d57-8ad1-48dc561aaa28","Type":"ContainerStarted","Data":"d0b790fa4c4d35716e2be024315c8c4862b5957dd5d05c1776e1c047272c87be"} Jan 21 21:24:25 crc kubenswrapper[4860]: I0121 21:24:25.713528 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xd2ml" event={"ID":"c9335377-613f-4d57-8ad1-48dc561aaa28","Type":"ContainerStarted","Data":"5ce55d24aefbdf0522c4e8a695e0b7ec2a4a3ee93b46ade16baec4bae113342c"} Jan 21 21:24:25 crc kubenswrapper[4860]: I0121 21:24:25.713670 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:25 crc kubenswrapper[4860]: I0121 21:24:25.738609 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-xd2ml" podStartSLOduration=2.738555214 podStartE2EDuration="2.738555214s" 
podCreationTimestamp="2026-01-21 21:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:24:25.733470974 +0000 UTC m=+957.955649464" watchObservedRunningTime="2026-01-21 21:24:25.738555214 +0000 UTC m=+957.960733684" Jan 21 21:24:25 crc kubenswrapper[4860]: I0121 21:24:25.841835 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-memberlist\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:25 crc kubenswrapper[4860]: I0121 21:24:25.857139 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65134009-4244-4384-91b7-057584cd6586-memberlist\") pod \"speaker-5hvn2\" (UID: \"65134009-4244-4384-91b7-057584cd6586\") " pod="metallb-system/speaker-5hvn2" Jan 21 21:24:25 crc kubenswrapper[4860]: I0121 21:24:25.965126 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-5hvn2" Jan 21 21:24:26 crc kubenswrapper[4860]: I0121 21:24:26.733501 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5hvn2" event={"ID":"65134009-4244-4384-91b7-057584cd6586","Type":"ContainerStarted","Data":"18c5119c52c524e47bed54e3f3a95e571f278402010ac5933087ba1c7c634842"} Jan 21 21:24:26 crc kubenswrapper[4860]: I0121 21:24:26.733960 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5hvn2" event={"ID":"65134009-4244-4384-91b7-057584cd6586","Type":"ContainerStarted","Data":"35f4186e86a44b8e0c3b3fb219f446661ecf9c3cd3c1f6e0c97732f7a3675e80"} Jan 21 21:24:27 crc kubenswrapper[4860]: I0121 21:24:27.746023 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5hvn2" event={"ID":"65134009-4244-4384-91b7-057584cd6586","Type":"ContainerStarted","Data":"a8aad2cbe48bad7815606deb43ae56bc09e75c77d85f8a7140ee7d85d4e3ef4b"} Jan 21 21:24:27 crc kubenswrapper[4860]: I0121 21:24:27.746163 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5hvn2" Jan 21 21:24:27 crc kubenswrapper[4860]: I0121 21:24:27.768311 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5hvn2" podStartSLOduration=4.7682877569999995 podStartE2EDuration="4.768287757s" podCreationTimestamp="2026-01-21 21:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:24:27.765746737 +0000 UTC m=+959.987925227" watchObservedRunningTime="2026-01-21 21:24:27.768287757 +0000 UTC m=+959.990466237" Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.103520 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.104193 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.104256 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.105037 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6450f5e048fd300a5315e1af026d3a0f05cce9ec9913389ebdc890cf54d0c51e"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.105126 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://6450f5e048fd300a5315e1af026d3a0f05cce9ec9913389ebdc890cf54d0c51e" gracePeriod=600 Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.763001 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tqgf8"] Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.766965 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.793189 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqgf8"] Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.805315 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="6450f5e048fd300a5315e1af026d3a0f05cce9ec9913389ebdc890cf54d0c51e" exitCode=0 Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.805376 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"6450f5e048fd300a5315e1af026d3a0f05cce9ec9913389ebdc890cf54d0c51e"} Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.805440 4860 scope.go:117] "RemoveContainer" containerID="96db8aeabde9598ee6245e662c986810c9f7612477589d8508dbf6ba2ca4f34f" Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.901782 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-catalog-content\") pod \"redhat-marketplace-tqgf8\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.901854 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-utilities\") pod \"redhat-marketplace-tqgf8\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:32 crc kubenswrapper[4860]: I0121 21:24:32.901891 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nq5bw\" (UniqueName: \"kubernetes.io/projected/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-kube-api-access-nq5bw\") pod \"redhat-marketplace-tqgf8\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:33 crc kubenswrapper[4860]: I0121 21:24:33.003260 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-catalog-content\") pod \"redhat-marketplace-tqgf8\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:33 crc kubenswrapper[4860]: I0121 21:24:33.003333 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-utilities\") pod \"redhat-marketplace-tqgf8\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:33 crc kubenswrapper[4860]: I0121 21:24:33.003362 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq5bw\" (UniqueName: \"kubernetes.io/projected/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-kube-api-access-nq5bw\") pod \"redhat-marketplace-tqgf8\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:33 crc kubenswrapper[4860]: I0121 21:24:33.004005 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-catalog-content\") pod \"redhat-marketplace-tqgf8\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:33 crc kubenswrapper[4860]: I0121 21:24:33.004005 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-utilities\") pod \"redhat-marketplace-tqgf8\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:33 crc kubenswrapper[4860]: I0121 21:24:33.026496 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq5bw\" (UniqueName: \"kubernetes.io/projected/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-kube-api-access-nq5bw\") pod \"redhat-marketplace-tqgf8\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:33 crc kubenswrapper[4860]: I0121 21:24:33.096894 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:34 crc kubenswrapper[4860]: I0121 21:24:34.113697 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqgf8"] Jan 21 21:24:34 crc kubenswrapper[4860]: I0121 21:24:34.486074 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-xd2ml" Jan 21 21:24:34 crc kubenswrapper[4860]: I0121 21:24:34.832763 4860 generic.go:334] "Generic (PLEG): container finished" podID="970afa92-8bd5-4351-80dd-ca87ad067409" containerID="347375f8a506c3be68a564b6f3a6e79945f9da8233dc1915b7f02aa7de6b28b7" exitCode=0 Jan 21 21:24:34 crc kubenswrapper[4860]: I0121 21:24:34.833367 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6m2js" event={"ID":"970afa92-8bd5-4351-80dd-ca87ad067409","Type":"ContainerDied","Data":"347375f8a506c3be68a564b6f3a6e79945f9da8233dc1915b7f02aa7de6b28b7"} Jan 21 21:24:34 crc kubenswrapper[4860]: I0121 21:24:34.835610 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" 
event={"ID":"e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e","Type":"ContainerStarted","Data":"4d7d05f968db95a1a3d6ff961a871c60d6ad0746e5e2474d80999bdb919d1bfb"} Jan 21 21:24:34 crc kubenswrapper[4860]: I0121 21:24:34.835739 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" Jan 21 21:24:34 crc kubenswrapper[4860]: I0121 21:24:34.838423 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"6f0b3fc12fa9ba32ff6e2eb0239bbfea7864555f13d17d499448eef7cdde4887"} Jan 21 21:24:34 crc kubenswrapper[4860]: I0121 21:24:34.849617 4860 generic.go:334] "Generic (PLEG): container finished" podID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" containerID="0e3273e96749ecfd8e052acefb95928c46a8c602d35e1d6e3f20b34a92125d6a" exitCode=0 Jan 21 21:24:34 crc kubenswrapper[4860]: I0121 21:24:34.849685 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqgf8" event={"ID":"29a70e03-d609-4cf2-b549-64bf0cb0cf2b","Type":"ContainerDied","Data":"0e3273e96749ecfd8e052acefb95928c46a8c602d35e1d6e3f20b34a92125d6a"} Jan 21 21:24:34 crc kubenswrapper[4860]: I0121 21:24:34.849746 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqgf8" event={"ID":"29a70e03-d609-4cf2-b549-64bf0cb0cf2b","Type":"ContainerStarted","Data":"28a1848ac7f984c4d193fdb65ba1ab8daaac6ad8facd335c4ed08948b92d2c10"} Jan 21 21:24:34 crc kubenswrapper[4860]: I0121 21:24:34.900837 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" podStartSLOduration=3.375682653 podStartE2EDuration="11.90079108s" podCreationTimestamp="2026-01-21 21:24:23 +0000 UTC" firstStartedPulling="2026-01-21 21:24:25.218283904 +0000 UTC m=+957.440462364" 
lastFinishedPulling="2026-01-21 21:24:33.743392321 +0000 UTC m=+965.965570791" observedRunningTime="2026-01-21 21:24:34.893883544 +0000 UTC m=+967.116062084" watchObservedRunningTime="2026-01-21 21:24:34.90079108 +0000 UTC m=+967.122969560" Jan 21 21:24:35 crc kubenswrapper[4860]: I0121 21:24:35.879122 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqgf8" event={"ID":"29a70e03-d609-4cf2-b549-64bf0cb0cf2b","Type":"ContainerStarted","Data":"b4158c6a20d88850a0bdefa3ddea5a6991d03fb9d4a55e9c149ebfefa2764ae6"} Jan 21 21:24:35 crc kubenswrapper[4860]: I0121 21:24:35.887306 4860 generic.go:334] "Generic (PLEG): container finished" podID="970afa92-8bd5-4351-80dd-ca87ad067409" containerID="07e7dcef5154f9109b8235284f2dcd48cf9d66945f2e02cb2a23b30ad7a6f655" exitCode=0 Jan 21 21:24:35 crc kubenswrapper[4860]: I0121 21:24:35.887567 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6m2js" event={"ID":"970afa92-8bd5-4351-80dd-ca87ad067409","Type":"ContainerDied","Data":"07e7dcef5154f9109b8235284f2dcd48cf9d66945f2e02cb2a23b30ad7a6f655"} Jan 21 21:24:36 crc kubenswrapper[4860]: I0121 21:24:36.900381 4860 generic.go:334] "Generic (PLEG): container finished" podID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" containerID="b4158c6a20d88850a0bdefa3ddea5a6991d03fb9d4a55e9c149ebfefa2764ae6" exitCode=0 Jan 21 21:24:36 crc kubenswrapper[4860]: I0121 21:24:36.900544 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqgf8" event={"ID":"29a70e03-d609-4cf2-b549-64bf0cb0cf2b","Type":"ContainerDied","Data":"b4158c6a20d88850a0bdefa3ddea5a6991d03fb9d4a55e9c149ebfefa2764ae6"} Jan 21 21:24:36 crc kubenswrapper[4860]: I0121 21:24:36.907606 4860 generic.go:334] "Generic (PLEG): container finished" podID="970afa92-8bd5-4351-80dd-ca87ad067409" containerID="2215e1c1bda165afed2ec7fb07baaa19936f9c9e277792fb36f58f478d549081" exitCode=0 Jan 21 21:24:36 crc kubenswrapper[4860]: 
I0121 21:24:36.907697 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6m2js" event={"ID":"970afa92-8bd5-4351-80dd-ca87ad067409","Type":"ContainerDied","Data":"2215e1c1bda165afed2ec7fb07baaa19936f9c9e277792fb36f58f478d549081"} Jan 21 21:24:37 crc kubenswrapper[4860]: I0121 21:24:37.932827 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqgf8" event={"ID":"29a70e03-d609-4cf2-b549-64bf0cb0cf2b","Type":"ContainerStarted","Data":"79c3c83afa8ba8dbbde8206ce9acc7e748f2c13b5bc977f64e341750a73bae0d"} Jan 21 21:24:37 crc kubenswrapper[4860]: I0121 21:24:37.943345 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6m2js" event={"ID":"970afa92-8bd5-4351-80dd-ca87ad067409","Type":"ContainerStarted","Data":"2df0e74054b73c7a3889c416a6807c707a8961b14fd999c785a8e3fd85edfa9d"} Jan 21 21:24:37 crc kubenswrapper[4860]: I0121 21:24:37.943391 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6m2js" event={"ID":"970afa92-8bd5-4351-80dd-ca87ad067409","Type":"ContainerStarted","Data":"e960988631577f69604df225f1aeaa5f344615a7f20b4ede946e17c076738184"} Jan 21 21:24:37 crc kubenswrapper[4860]: I0121 21:24:37.943401 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6m2js" event={"ID":"970afa92-8bd5-4351-80dd-ca87ad067409","Type":"ContainerStarted","Data":"1917a9d6931683e210d1f00560d89a18191519c10eda067e1846dc7b89a2a6b0"} Jan 21 21:24:37 crc kubenswrapper[4860]: I0121 21:24:37.943411 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6m2js" event={"ID":"970afa92-8bd5-4351-80dd-ca87ad067409","Type":"ContainerStarted","Data":"9f44b857bca95e5091c0ec57876015e219ad8713d8da8d2d0d6db068be49649c"} Jan 21 21:24:38 crc kubenswrapper[4860]: I0121 21:24:38.958739 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6m2js" 
event={"ID":"970afa92-8bd5-4351-80dd-ca87ad067409","Type":"ContainerStarted","Data":"4a7113c548a0ea68b25b38f52992a1b9f4253f66b387c05d6bac8e7d6a3e534a"} Jan 21 21:24:38 crc kubenswrapper[4860]: I0121 21:24:38.959156 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6m2js" event={"ID":"970afa92-8bd5-4351-80dd-ca87ad067409","Type":"ContainerStarted","Data":"fae0532386f6d6a452750a48497e1e8445cc6c0be19bbdf4b1cfa576070c008a"} Jan 21 21:24:38 crc kubenswrapper[4860]: I0121 21:24:38.959208 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:38 crc kubenswrapper[4860]: I0121 21:24:38.987180 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-6m2js" podStartSLOduration=6.646444514 podStartE2EDuration="15.987153825s" podCreationTimestamp="2026-01-21 21:24:23 +0000 UTC" firstStartedPulling="2026-01-21 21:24:24.392595606 +0000 UTC m=+956.614774096" lastFinishedPulling="2026-01-21 21:24:33.733304937 +0000 UTC m=+965.955483407" observedRunningTime="2026-01-21 21:24:38.982149469 +0000 UTC m=+971.204327969" watchObservedRunningTime="2026-01-21 21:24:38.987153825 +0000 UTC m=+971.209332295" Jan 21 21:24:38 crc kubenswrapper[4860]: I0121 21:24:38.988064 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tqgf8" podStartSLOduration=4.396074731 podStartE2EDuration="6.988049634s" podCreationTimestamp="2026-01-21 21:24:32 +0000 UTC" firstStartedPulling="2026-01-21 21:24:34.852058527 +0000 UTC m=+967.074236997" lastFinishedPulling="2026-01-21 21:24:37.44403343 +0000 UTC m=+969.666211900" observedRunningTime="2026-01-21 21:24:37.976339785 +0000 UTC m=+970.198518305" watchObservedRunningTime="2026-01-21 21:24:38.988049634 +0000 UTC m=+971.210228114" Jan 21 21:24:39 crc kubenswrapper[4860]: I0121 21:24:39.171384 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:39 crc kubenswrapper[4860]: I0121 21:24:39.228918 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:43 crc kubenswrapper[4860]: I0121 21:24:43.097427 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:43 crc kubenswrapper[4860]: I0121 21:24:43.098199 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:43 crc kubenswrapper[4860]: I0121 21:24:43.174615 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:44 crc kubenswrapper[4860]: I0121 21:24:44.162554 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:44 crc kubenswrapper[4860]: I0121 21:24:44.221759 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqgf8"] Jan 21 21:24:44 crc kubenswrapper[4860]: I0121 21:24:44.803624 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6vpls" Jan 21 21:24:45 crc kubenswrapper[4860]: I0121 21:24:45.903298 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lrbtk"] Jan 21 21:24:45 crc kubenswrapper[4860]: I0121 21:24:45.906436 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:45 crc kubenswrapper[4860]: I0121 21:24:45.937101 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrbtk"] Jan 21 21:24:45 crc kubenswrapper[4860]: I0121 21:24:45.941074 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-utilities\") pod \"certified-operators-lrbtk\" (UID: \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:45 crc kubenswrapper[4860]: I0121 21:24:45.941178 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-catalog-content\") pod \"certified-operators-lrbtk\" (UID: \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:45 crc kubenswrapper[4860]: I0121 21:24:45.941316 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqm5c\" (UniqueName: \"kubernetes.io/projected/0ec2f9d2-d9b9-4dca-b147-f076a4104748-kube-api-access-rqm5c\") pod \"certified-operators-lrbtk\" (UID: \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:45 crc kubenswrapper[4860]: I0121 21:24:45.970598 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5hvn2" Jan 21 21:24:46 crc kubenswrapper[4860]: I0121 21:24:46.043045 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-utilities\") pod \"certified-operators-lrbtk\" (UID: 
\"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:46 crc kubenswrapper[4860]: I0121 21:24:46.043166 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-catalog-content\") pod \"certified-operators-lrbtk\" (UID: \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:46 crc kubenswrapper[4860]: I0121 21:24:46.043280 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqm5c\" (UniqueName: \"kubernetes.io/projected/0ec2f9d2-d9b9-4dca-b147-f076a4104748-kube-api-access-rqm5c\") pod \"certified-operators-lrbtk\" (UID: \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:46 crc kubenswrapper[4860]: I0121 21:24:46.044966 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-utilities\") pod \"certified-operators-lrbtk\" (UID: \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:46 crc kubenswrapper[4860]: I0121 21:24:46.045400 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-catalog-content\") pod \"certified-operators-lrbtk\" (UID: \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:46 crc kubenswrapper[4860]: I0121 21:24:46.077198 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqm5c\" (UniqueName: \"kubernetes.io/projected/0ec2f9d2-d9b9-4dca-b147-f076a4104748-kube-api-access-rqm5c\") pod \"certified-operators-lrbtk\" (UID: 
\"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:46 crc kubenswrapper[4860]: I0121 21:24:46.154541 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tqgf8" podUID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" containerName="registry-server" containerID="cri-o://79c3c83afa8ba8dbbde8206ce9acc7e748f2c13b5bc977f64e341750a73bae0d" gracePeriod=2 Jan 21 21:24:46 crc kubenswrapper[4860]: I0121 21:24:46.241839 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:46 crc kubenswrapper[4860]: I0121 21:24:46.552325 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrbtk"] Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.166846 4860 generic.go:334] "Generic (PLEG): container finished" podID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" containerID="79c3c83afa8ba8dbbde8206ce9acc7e748f2c13b5bc977f64e341750a73bae0d" exitCode=0 Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.166962 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqgf8" event={"ID":"29a70e03-d609-4cf2-b549-64bf0cb0cf2b","Type":"ContainerDied","Data":"79c3c83afa8ba8dbbde8206ce9acc7e748f2c13b5bc977f64e341750a73bae0d"} Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.169294 4860 generic.go:334] "Generic (PLEG): container finished" podID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" containerID="009a73ec2d11f6dfdb80be559626bef0503bcf9718c832c46014f949f077728b" exitCode=0 Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.169358 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrbtk" event={"ID":"0ec2f9d2-d9b9-4dca-b147-f076a4104748","Type":"ContainerDied","Data":"009a73ec2d11f6dfdb80be559626bef0503bcf9718c832c46014f949f077728b"} Jan 21 
21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.169398 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrbtk" event={"ID":"0ec2f9d2-d9b9-4dca-b147-f076a4104748","Type":"ContainerStarted","Data":"aeef49820cb926a504a8bcbdfd0ac4e1e2cb52915dc1b63fffe00442dc6c390b"} Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.778179 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.892075 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj"] Jan 21 21:24:47 crc kubenswrapper[4860]: E0121 21:24:47.892634 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" containerName="extract-content" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.892669 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" containerName="extract-content" Jan 21 21:24:47 crc kubenswrapper[4860]: E0121 21:24:47.892691 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" containerName="registry-server" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.892701 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" containerName="registry-server" Jan 21 21:24:47 crc kubenswrapper[4860]: E0121 21:24:47.892716 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" containerName="extract-utilities" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.892728 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" containerName="extract-utilities" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.892972 4860 
memory_manager.go:354] "RemoveStaleState removing state" podUID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" containerName="registry-server" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.894508 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.897226 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.898634 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj"] Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.947110 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-catalog-content\") pod \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.947762 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-utilities\") pod \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.948300 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq5bw\" (UniqueName: \"kubernetes.io/projected/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-kube-api-access-nq5bw\") pod \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\" (UID: \"29a70e03-d609-4cf2-b549-64bf0cb0cf2b\") " Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.948448 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-khzxj\" (UniqueName: \"kubernetes.io/projected/64076d63-918c-4b94-9dae-a1ce4cd5b254-kube-api-access-khzxj\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.948512 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.948614 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.951089 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-utilities" (OuterVolumeSpecName: "utilities") pod "29a70e03-d609-4cf2-b549-64bf0cb0cf2b" (UID: "29a70e03-d609-4cf2-b549-64bf0cb0cf2b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.964023 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-kube-api-access-nq5bw" (OuterVolumeSpecName: "kube-api-access-nq5bw") pod "29a70e03-d609-4cf2-b549-64bf0cb0cf2b" (UID: "29a70e03-d609-4cf2-b549-64bf0cb0cf2b"). InnerVolumeSpecName "kube-api-access-nq5bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:24:47 crc kubenswrapper[4860]: I0121 21:24:47.987922 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29a70e03-d609-4cf2-b549-64bf0cb0cf2b" (UID: "29a70e03-d609-4cf2-b549-64bf0cb0cf2b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.049981 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.050068 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khzxj\" (UniqueName: \"kubernetes.io/projected/64076d63-918c-4b94-9dae-a1ce4cd5b254-kube-api-access-khzxj\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.050113 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.050161 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.050173 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.050184 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq5bw\" (UniqueName: \"kubernetes.io/projected/29a70e03-d609-4cf2-b549-64bf0cb0cf2b-kube-api-access-nq5bw\") on node \"crc\" DevicePath \"\"" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.050524 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.050587 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.075166 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khzxj\" (UniqueName: \"kubernetes.io/projected/64076d63-918c-4b94-9dae-a1ce4cd5b254-kube-api-access-khzxj\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.178973 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqgf8" event={"ID":"29a70e03-d609-4cf2-b549-64bf0cb0cf2b","Type":"ContainerDied","Data":"28a1848ac7f984c4d193fdb65ba1ab8daaac6ad8facd335c4ed08948b92d2c10"} Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.179044 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqgf8" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.179051 4860 scope.go:117] "RemoveContainer" containerID="79c3c83afa8ba8dbbde8206ce9acc7e748f2c13b5bc977f64e341750a73bae0d" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.204920 4860 scope.go:117] "RemoveContainer" containerID="b4158c6a20d88850a0bdefa3ddea5a6991d03fb9d4a55e9c149ebfefa2764ae6" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.214035 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqgf8"] Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.214904 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.222427 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqgf8"] Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.240861 4860 scope.go:117] "RemoveContainer" containerID="0e3273e96749ecfd8e052acefb95928c46a8c602d35e1d6e3f20b34a92125d6a" Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.497851 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj"] Jan 21 21:24:48 crc kubenswrapper[4860]: I0121 21:24:48.598273 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a70e03-d609-4cf2-b549-64bf0cb0cf2b" path="/var/lib/kubelet/pods/29a70e03-d609-4cf2-b549-64bf0cb0cf2b/volumes" Jan 21 21:24:49 crc kubenswrapper[4860]: I0121 21:24:49.189708 4860 generic.go:334] "Generic (PLEG): container finished" podID="64076d63-918c-4b94-9dae-a1ce4cd5b254" containerID="bfde36e7d70b66186b1382950ca69ea6d0538ab48d6222537df13c53d2341be0" exitCode=0 Jan 21 21:24:49 crc kubenswrapper[4860]: I0121 21:24:49.190149 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" event={"ID":"64076d63-918c-4b94-9dae-a1ce4cd5b254","Type":"ContainerDied","Data":"bfde36e7d70b66186b1382950ca69ea6d0538ab48d6222537df13c53d2341be0"} Jan 21 21:24:49 crc kubenswrapper[4860]: I0121 21:24:49.190214 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" event={"ID":"64076d63-918c-4b94-9dae-a1ce4cd5b254","Type":"ContainerStarted","Data":"3135fe0884be1a04a22c63fede4365337a6df34365a105917bb72d0482610ae0"} Jan 21 21:24:49 crc kubenswrapper[4860]: I0121 21:24:49.194522 4860 generic.go:334] "Generic 
(PLEG): container finished" podID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" containerID="8e1f16056392317792857f9531837319baba7afe8e4cfbeee6029606ef7a2d2d" exitCode=0 Jan 21 21:24:49 crc kubenswrapper[4860]: I0121 21:24:49.194618 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrbtk" event={"ID":"0ec2f9d2-d9b9-4dca-b147-f076a4104748","Type":"ContainerDied","Data":"8e1f16056392317792857f9531837319baba7afe8e4cfbeee6029606ef7a2d2d"} Jan 21 21:24:51 crc kubenswrapper[4860]: I0121 21:24:51.229222 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrbtk" event={"ID":"0ec2f9d2-d9b9-4dca-b147-f076a4104748","Type":"ContainerStarted","Data":"c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035"} Jan 21 21:24:51 crc kubenswrapper[4860]: I0121 21:24:51.282802 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lrbtk" podStartSLOduration=3.809479397 podStartE2EDuration="6.282766826s" podCreationTimestamp="2026-01-21 21:24:45 +0000 UTC" firstStartedPulling="2026-01-21 21:24:47.17107819 +0000 UTC m=+979.393256660" lastFinishedPulling="2026-01-21 21:24:49.644365619 +0000 UTC m=+981.866544089" observedRunningTime="2026-01-21 21:24:51.276296854 +0000 UTC m=+983.498475334" watchObservedRunningTime="2026-01-21 21:24:51.282766826 +0000 UTC m=+983.504945296" Jan 21 21:24:54 crc kubenswrapper[4860]: I0121 21:24:54.174111 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-6m2js" Jan 21 21:24:56 crc kubenswrapper[4860]: I0121 21:24:56.243043 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:56 crc kubenswrapper[4860]: I0121 21:24:56.244158 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 
21:24:56 crc kubenswrapper[4860]: I0121 21:24:56.304435 4860 generic.go:334] "Generic (PLEG): container finished" podID="64076d63-918c-4b94-9dae-a1ce4cd5b254" containerID="41b612d2ced37f128603e66e84d6a0bb8b0d518ae817120271ea1ec754a26b2a" exitCode=0 Jan 21 21:24:56 crc kubenswrapper[4860]: I0121 21:24:56.304472 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" event={"ID":"64076d63-918c-4b94-9dae-a1ce4cd5b254","Type":"ContainerDied","Data":"41b612d2ced37f128603e66e84d6a0bb8b0d518ae817120271ea1ec754a26b2a"} Jan 21 21:24:56 crc kubenswrapper[4860]: I0121 21:24:56.361499 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:56 crc kubenswrapper[4860]: I0121 21:24:56.420230 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:57 crc kubenswrapper[4860]: I0121 21:24:57.322830 4860 generic.go:334] "Generic (PLEG): container finished" podID="64076d63-918c-4b94-9dae-a1ce4cd5b254" containerID="e6b2406a583dab6d52361b5de4937fd450c7c39a94d0a120f38aab6a8943c637" exitCode=0 Jan 21 21:24:57 crc kubenswrapper[4860]: I0121 21:24:57.323015 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" event={"ID":"64076d63-918c-4b94-9dae-a1ce4cd5b254","Type":"ContainerDied","Data":"e6b2406a583dab6d52361b5de4937fd450c7c39a94d0a120f38aab6a8943c637"} Jan 21 21:24:58 crc kubenswrapper[4860]: I0121 21:24:58.604329 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:58 crc kubenswrapper[4860]: I0121 21:24:58.771521 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khzxj\" (UniqueName: \"kubernetes.io/projected/64076d63-918c-4b94-9dae-a1ce4cd5b254-kube-api-access-khzxj\") pod \"64076d63-918c-4b94-9dae-a1ce4cd5b254\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " Jan 21 21:24:58 crc kubenswrapper[4860]: I0121 21:24:58.771682 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-util\") pod \"64076d63-918c-4b94-9dae-a1ce4cd5b254\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " Jan 21 21:24:58 crc kubenswrapper[4860]: I0121 21:24:58.771744 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-bundle\") pod \"64076d63-918c-4b94-9dae-a1ce4cd5b254\" (UID: \"64076d63-918c-4b94-9dae-a1ce4cd5b254\") " Jan 21 21:24:58 crc kubenswrapper[4860]: I0121 21:24:58.773007 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-bundle" (OuterVolumeSpecName: "bundle") pod "64076d63-918c-4b94-9dae-a1ce4cd5b254" (UID: "64076d63-918c-4b94-9dae-a1ce4cd5b254"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:24:58 crc kubenswrapper[4860]: I0121 21:24:58.781443 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-util" (OuterVolumeSpecName: "util") pod "64076d63-918c-4b94-9dae-a1ce4cd5b254" (UID: "64076d63-918c-4b94-9dae-a1ce4cd5b254"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:24:58 crc kubenswrapper[4860]: I0121 21:24:58.782004 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64076d63-918c-4b94-9dae-a1ce4cd5b254-kube-api-access-khzxj" (OuterVolumeSpecName: "kube-api-access-khzxj") pod "64076d63-918c-4b94-9dae-a1ce4cd5b254" (UID: "64076d63-918c-4b94-9dae-a1ce4cd5b254"). InnerVolumeSpecName "kube-api-access-khzxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:24:58 crc kubenswrapper[4860]: I0121 21:24:58.873378 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khzxj\" (UniqueName: \"kubernetes.io/projected/64076d63-918c-4b94-9dae-a1ce4cd5b254-kube-api-access-khzxj\") on node \"crc\" DevicePath \"\"" Jan 21 21:24:58 crc kubenswrapper[4860]: I0121 21:24:58.873416 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-util\") on node \"crc\" DevicePath \"\"" Jan 21 21:24:58 crc kubenswrapper[4860]: I0121 21:24:58.873428 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64076d63-918c-4b94-9dae-a1ce4cd5b254-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:24:59 crc kubenswrapper[4860]: I0121 21:24:59.342262 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" event={"ID":"64076d63-918c-4b94-9dae-a1ce4cd5b254","Type":"ContainerDied","Data":"3135fe0884be1a04a22c63fede4365337a6df34365a105917bb72d0482610ae0"} Jan 21 21:24:59 crc kubenswrapper[4860]: I0121 21:24:59.342319 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3135fe0884be1a04a22c63fede4365337a6df34365a105917bb72d0482610ae0" Jan 21 21:24:59 crc kubenswrapper[4860]: I0121 21:24:59.342365 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj" Jan 21 21:24:59 crc kubenswrapper[4860]: I0121 21:24:59.420569 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrbtk"] Jan 21 21:24:59 crc kubenswrapper[4860]: I0121 21:24:59.420909 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lrbtk" podUID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" containerName="registry-server" containerID="cri-o://c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035" gracePeriod=2 Jan 21 21:24:59 crc kubenswrapper[4860]: I0121 21:24:59.827590 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:24:59 crc kubenswrapper[4860]: I0121 21:24:59.890337 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-utilities\") pod \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\" (UID: \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " Jan 21 21:24:59 crc kubenswrapper[4860]: I0121 21:24:59.890460 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqm5c\" (UniqueName: \"kubernetes.io/projected/0ec2f9d2-d9b9-4dca-b147-f076a4104748-kube-api-access-rqm5c\") pod \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\" (UID: \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " Jan 21 21:24:59 crc kubenswrapper[4860]: I0121 21:24:59.890510 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-catalog-content\") pod \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\" (UID: \"0ec2f9d2-d9b9-4dca-b147-f076a4104748\") " Jan 21 21:24:59 crc kubenswrapper[4860]: I0121 21:24:59.891996 4860 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-utilities" (OuterVolumeSpecName: "utilities") pod "0ec2f9d2-d9b9-4dca-b147-f076a4104748" (UID: "0ec2f9d2-d9b9-4dca-b147-f076a4104748"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.026036 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec2f9d2-d9b9-4dca-b147-f076a4104748-kube-api-access-rqm5c" (OuterVolumeSpecName: "kube-api-access-rqm5c") pod "0ec2f9d2-d9b9-4dca-b147-f076a4104748" (UID: "0ec2f9d2-d9b9-4dca-b147-f076a4104748"). InnerVolumeSpecName "kube-api-access-rqm5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.031330 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.031462 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqm5c\" (UniqueName: \"kubernetes.io/projected/0ec2f9d2-d9b9-4dca-b147-f076a4104748-kube-api-access-rqm5c\") on node \"crc\" DevicePath \"\"" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.051548 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ec2f9d2-d9b9-4dca-b147-f076a4104748" (UID: "0ec2f9d2-d9b9-4dca-b147-f076a4104748"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.132759 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec2f9d2-d9b9-4dca-b147-f076a4104748-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.356639 4860 generic.go:334] "Generic (PLEG): container finished" podID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" containerID="c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035" exitCode=0 Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.356832 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrbtk" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.356819 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrbtk" event={"ID":"0ec2f9d2-d9b9-4dca-b147-f076a4104748","Type":"ContainerDied","Data":"c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035"} Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.357035 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrbtk" event={"ID":"0ec2f9d2-d9b9-4dca-b147-f076a4104748","Type":"ContainerDied","Data":"aeef49820cb926a504a8bcbdfd0ac4e1e2cb52915dc1b63fffe00442dc6c390b"} Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.357143 4860 scope.go:117] "RemoveContainer" containerID="c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.384762 4860 scope.go:117] "RemoveContainer" containerID="8e1f16056392317792857f9531837319baba7afe8e4cfbeee6029606ef7a2d2d" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.396320 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrbtk"] Jan 21 21:25:00 crc kubenswrapper[4860]: 
I0121 21:25:00.401807 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lrbtk"] Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.411147 4860 scope.go:117] "RemoveContainer" containerID="009a73ec2d11f6dfdb80be559626bef0503bcf9718c832c46014f949f077728b" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.436196 4860 scope.go:117] "RemoveContainer" containerID="c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035" Jan 21 21:25:00 crc kubenswrapper[4860]: E0121 21:25:00.436888 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035\": container with ID starting with c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035 not found: ID does not exist" containerID="c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.436967 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035"} err="failed to get container status \"c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035\": rpc error: code = NotFound desc = could not find container \"c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035\": container with ID starting with c0ad06002edf4b54ced563b8ffd1dae3f0c04ed7f130c09815f83da84e2e2035 not found: ID does not exist" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.437030 4860 scope.go:117] "RemoveContainer" containerID="8e1f16056392317792857f9531837319baba7afe8e4cfbeee6029606ef7a2d2d" Jan 21 21:25:00 crc kubenswrapper[4860]: E0121 21:25:00.437305 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e1f16056392317792857f9531837319baba7afe8e4cfbeee6029606ef7a2d2d\": container 
with ID starting with 8e1f16056392317792857f9531837319baba7afe8e4cfbeee6029606ef7a2d2d not found: ID does not exist" containerID="8e1f16056392317792857f9531837319baba7afe8e4cfbeee6029606ef7a2d2d" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.437334 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e1f16056392317792857f9531837319baba7afe8e4cfbeee6029606ef7a2d2d"} err="failed to get container status \"8e1f16056392317792857f9531837319baba7afe8e4cfbeee6029606ef7a2d2d\": rpc error: code = NotFound desc = could not find container \"8e1f16056392317792857f9531837319baba7afe8e4cfbeee6029606ef7a2d2d\": container with ID starting with 8e1f16056392317792857f9531837319baba7afe8e4cfbeee6029606ef7a2d2d not found: ID does not exist" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.437354 4860 scope.go:117] "RemoveContainer" containerID="009a73ec2d11f6dfdb80be559626bef0503bcf9718c832c46014f949f077728b" Jan 21 21:25:00 crc kubenswrapper[4860]: E0121 21:25:00.437699 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009a73ec2d11f6dfdb80be559626bef0503bcf9718c832c46014f949f077728b\": container with ID starting with 009a73ec2d11f6dfdb80be559626bef0503bcf9718c832c46014f949f077728b not found: ID does not exist" containerID="009a73ec2d11f6dfdb80be559626bef0503bcf9718c832c46014f949f077728b" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.437723 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009a73ec2d11f6dfdb80be559626bef0503bcf9718c832c46014f949f077728b"} err="failed to get container status \"009a73ec2d11f6dfdb80be559626bef0503bcf9718c832c46014f949f077728b\": rpc error: code = NotFound desc = could not find container \"009a73ec2d11f6dfdb80be559626bef0503bcf9718c832c46014f949f077728b\": container with ID starting with 009a73ec2d11f6dfdb80be559626bef0503bcf9718c832c46014f949f077728b not 
found: ID does not exist" Jan 21 21:25:00 crc kubenswrapper[4860]: I0121 21:25:00.588377 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" path="/var/lib/kubelet/pods/0ec2f9d2-d9b9-4dca-b147-f076a4104748/volumes" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.956983 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744"] Jan 21 21:25:03 crc kubenswrapper[4860]: E0121 21:25:03.958089 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64076d63-918c-4b94-9dae-a1ce4cd5b254" containerName="pull" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.958117 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="64076d63-918c-4b94-9dae-a1ce4cd5b254" containerName="pull" Jan 21 21:25:03 crc kubenswrapper[4860]: E0121 21:25:03.958146 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" containerName="extract-utilities" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.958156 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" containerName="extract-utilities" Jan 21 21:25:03 crc kubenswrapper[4860]: E0121 21:25:03.958166 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" containerName="registry-server" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.958175 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" containerName="registry-server" Jan 21 21:25:03 crc kubenswrapper[4860]: E0121 21:25:03.958196 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" containerName="extract-content" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.958204 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" containerName="extract-content" Jan 21 21:25:03 crc kubenswrapper[4860]: E0121 21:25:03.958229 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64076d63-918c-4b94-9dae-a1ce4cd5b254" containerName="util" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.958237 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="64076d63-918c-4b94-9dae-a1ce4cd5b254" containerName="util" Jan 21 21:25:03 crc kubenswrapper[4860]: E0121 21:25:03.958247 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64076d63-918c-4b94-9dae-a1ce4cd5b254" containerName="extract" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.958254 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="64076d63-918c-4b94-9dae-a1ce4cd5b254" containerName="extract" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.958448 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec2f9d2-d9b9-4dca-b147-f076a4104748" containerName="registry-server" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.958477 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="64076d63-918c-4b94-9dae-a1ce4cd5b254" containerName="extract" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.959309 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.970135 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.970511 4860 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-xbn2h" Jan 21 21:25:03 crc kubenswrapper[4860]: I0121 21:25:03.970800 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 21 21:25:04 crc kubenswrapper[4860]: I0121 21:25:04.037239 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744"] Jan 21 21:25:04 crc kubenswrapper[4860]: I0121 21:25:04.043530 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df1008fa-122a-4546-b8be-1d80ef20f8c2-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pf744\" (UID: \"df1008fa-122a-4546-b8be-1d80ef20f8c2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744" Jan 21 21:25:04 crc kubenswrapper[4860]: I0121 21:25:04.043681 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgm2f\" (UniqueName: \"kubernetes.io/projected/df1008fa-122a-4546-b8be-1d80ef20f8c2-kube-api-access-vgm2f\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pf744\" (UID: \"df1008fa-122a-4546-b8be-1d80ef20f8c2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744" Jan 21 21:25:04 crc kubenswrapper[4860]: I0121 21:25:04.145173 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vgm2f\" (UniqueName: \"kubernetes.io/projected/df1008fa-122a-4546-b8be-1d80ef20f8c2-kube-api-access-vgm2f\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pf744\" (UID: \"df1008fa-122a-4546-b8be-1d80ef20f8c2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744" Jan 21 21:25:04 crc kubenswrapper[4860]: I0121 21:25:04.145264 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df1008fa-122a-4546-b8be-1d80ef20f8c2-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pf744\" (UID: \"df1008fa-122a-4546-b8be-1d80ef20f8c2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744" Jan 21 21:25:04 crc kubenswrapper[4860]: I0121 21:25:04.145883 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df1008fa-122a-4546-b8be-1d80ef20f8c2-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pf744\" (UID: \"df1008fa-122a-4546-b8be-1d80ef20f8c2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744" Jan 21 21:25:04 crc kubenswrapper[4860]: I0121 21:25:04.186350 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgm2f\" (UniqueName: \"kubernetes.io/projected/df1008fa-122a-4546-b8be-1d80ef20f8c2-kube-api-access-vgm2f\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pf744\" (UID: \"df1008fa-122a-4546-b8be-1d80ef20f8c2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744" Jan 21 21:25:04 crc kubenswrapper[4860]: I0121 21:25:04.279130 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744" Jan 21 21:25:04 crc kubenswrapper[4860]: I0121 21:25:04.566091 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744"] Jan 21 21:25:05 crc kubenswrapper[4860]: I0121 21:25:05.407458 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744" event={"ID":"df1008fa-122a-4546-b8be-1d80ef20f8c2","Type":"ContainerStarted","Data":"70c3c85e3011802e6ea16aad4952fe0289693548ee731e0bf843bc923ef7a327"} Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.042348 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6w5g5"] Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.044605 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.064773 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6w5g5"] Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.479338 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-utilities\") pod \"community-operators-6w5g5\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.479452 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-catalog-content\") pod \"community-operators-6w5g5\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " 
pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.479529 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crb85\" (UniqueName: \"kubernetes.io/projected/285d2fd2-a877-4616-af45-1311a938580a-kube-api-access-crb85\") pod \"community-operators-6w5g5\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.580105 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-catalog-content\") pod \"community-operators-6w5g5\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.580215 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crb85\" (UniqueName: \"kubernetes.io/projected/285d2fd2-a877-4616-af45-1311a938580a-kube-api-access-crb85\") pod \"community-operators-6w5g5\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.580241 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-utilities\") pod \"community-operators-6w5g5\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.580643 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-utilities\") pod \"community-operators-6w5g5\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " 
pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.580862 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-catalog-content\") pod \"community-operators-6w5g5\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.608156 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crb85\" (UniqueName: \"kubernetes.io/projected/285d2fd2-a877-4616-af45-1311a938580a-kube-api-access-crb85\") pod \"community-operators-6w5g5\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:09 crc kubenswrapper[4860]: I0121 21:25:09.677282 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:17 crc kubenswrapper[4860]: I0121 21:25:17.148472 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6w5g5"] Jan 21 21:25:17 crc kubenswrapper[4860]: I0121 21:25:17.838305 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744" event={"ID":"df1008fa-122a-4546-b8be-1d80ef20f8c2","Type":"ContainerStarted","Data":"9b264632cdb28fcf57055aa6566437d37bfe3a4be3dbba731cbc949ce87ca7d8"} Jan 21 21:25:17 crc kubenswrapper[4860]: I0121 21:25:17.840572 4860 generic.go:334] "Generic (PLEG): container finished" podID="285d2fd2-a877-4616-af45-1311a938580a" containerID="eac1b0cb5e54d5af3d2b7a7c45690ec76340eda29e472c26c65ca2cf4663dbba" exitCode=0 Jan 21 21:25:17 crc kubenswrapper[4860]: I0121 21:25:17.840708 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w5g5" 
event={"ID":"285d2fd2-a877-4616-af45-1311a938580a","Type":"ContainerDied","Data":"eac1b0cb5e54d5af3d2b7a7c45690ec76340eda29e472c26c65ca2cf4663dbba"} Jan 21 21:25:17 crc kubenswrapper[4860]: I0121 21:25:17.840797 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w5g5" event={"ID":"285d2fd2-a877-4616-af45-1311a938580a","Type":"ContainerStarted","Data":"48af5decc283c9fa04a40a790d3b1910039b65a9ca359fc831063070b2705832"} Jan 21 21:25:17 crc kubenswrapper[4860]: I0121 21:25:17.858756 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pf744" podStartSLOduration=2.68605652 podStartE2EDuration="14.858691516s" podCreationTimestamp="2026-01-21 21:25:03 +0000 UTC" firstStartedPulling="2026-01-21 21:25:04.575255933 +0000 UTC m=+996.797434403" lastFinishedPulling="2026-01-21 21:25:16.747890929 +0000 UTC m=+1008.970069399" observedRunningTime="2026-01-21 21:25:17.857981334 +0000 UTC m=+1010.080159814" watchObservedRunningTime="2026-01-21 21:25:17.858691516 +0000 UTC m=+1010.080870016" Jan 21 21:25:19 crc kubenswrapper[4860]: I0121 21:25:19.963490 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w5g5" event={"ID":"285d2fd2-a877-4616-af45-1311a938580a","Type":"ContainerStarted","Data":"ed86def566bc29d1b49af25f6f75dc55b51d49dae8f80d8d56fddedcad2c53d9"} Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.278965 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zvf7j"] Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.280366 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.284647 4860 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-txsr9" Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.284848 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.285053 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.285116 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5889d6e2-f3dc-4189-a782-cf0ad4db5e55-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zvf7j\" (UID: \"5889d6e2-f3dc-4189-a782-cf0ad4db5e55\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.285192 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kftlq\" (UniqueName: \"kubernetes.io/projected/5889d6e2-f3dc-4189-a782-cf0ad4db5e55-kube-api-access-kftlq\") pod \"cert-manager-webhook-f4fb5df64-zvf7j\" (UID: \"5889d6e2-f3dc-4189-a782-cf0ad4db5e55\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.293626 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zvf7j"] Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.470754 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5889d6e2-f3dc-4189-a782-cf0ad4db5e55-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zvf7j\" (UID: \"5889d6e2-f3dc-4189-a782-cf0ad4db5e55\") 
" pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.470823 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kftlq\" (UniqueName: \"kubernetes.io/projected/5889d6e2-f3dc-4189-a782-cf0ad4db5e55-kube-api-access-kftlq\") pod \"cert-manager-webhook-f4fb5df64-zvf7j\" (UID: \"5889d6e2-f3dc-4189-a782-cf0ad4db5e55\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.492037 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kftlq\" (UniqueName: \"kubernetes.io/projected/5889d6e2-f3dc-4189-a782-cf0ad4db5e55-kube-api-access-kftlq\") pod \"cert-manager-webhook-f4fb5df64-zvf7j\" (UID: \"5889d6e2-f3dc-4189-a782-cf0ad4db5e55\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.494626 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5889d6e2-f3dc-4189-a782-cf0ad4db5e55-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zvf7j\" (UID: \"5889d6e2-f3dc-4189-a782-cf0ad4db5e55\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" Jan 21 21:25:20 crc kubenswrapper[4860]: I0121 21:25:20.604921 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" Jan 21 21:25:21 crc kubenswrapper[4860]: I0121 21:25:21.799949 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zvf7j"] Jan 21 21:25:21 crc kubenswrapper[4860]: I0121 21:25:21.999222 4860 generic.go:334] "Generic (PLEG): container finished" podID="285d2fd2-a877-4616-af45-1311a938580a" containerID="ed86def566bc29d1b49af25f6f75dc55b51d49dae8f80d8d56fddedcad2c53d9" exitCode=0 Jan 21 21:25:21 crc kubenswrapper[4860]: I0121 21:25:21.999299 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w5g5" event={"ID":"285d2fd2-a877-4616-af45-1311a938580a","Type":"ContainerDied","Data":"ed86def566bc29d1b49af25f6f75dc55b51d49dae8f80d8d56fddedcad2c53d9"} Jan 21 21:25:22 crc kubenswrapper[4860]: I0121 21:25:22.008043 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" event={"ID":"5889d6e2-f3dc-4189-a782-cf0ad4db5e55","Type":"ContainerStarted","Data":"9fdaa6bf3bde2fe666cc7c109d983547ea67d10940a42e615eebe2b8c56409a9"} Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.119166 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j"] Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.121739 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j" Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.139654 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w5g5" event={"ID":"285d2fd2-a877-4616-af45-1311a938580a","Type":"ContainerStarted","Data":"b9e3d949f1ef493325db1ae4fcb3b4357c0372fae034a2ec2e002518563a24df"} Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.151063 4860 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5f24h" Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.166920 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j"] Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.188567 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa444955-5bc4-4188-9b3e-80b24e9e6cb4-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-m5v7j\" (UID: \"fa444955-5bc4-4188-9b3e-80b24e9e6cb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j" Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.188726 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrz2\" (UniqueName: \"kubernetes.io/projected/fa444955-5bc4-4188-9b3e-80b24e9e6cb4-kube-api-access-dqrz2\") pod \"cert-manager-cainjector-855d9ccff4-m5v7j\" (UID: \"fa444955-5bc4-4188-9b3e-80b24e9e6cb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j" Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.289590 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqrz2\" (UniqueName: \"kubernetes.io/projected/fa444955-5bc4-4188-9b3e-80b24e9e6cb4-kube-api-access-dqrz2\") pod \"cert-manager-cainjector-855d9ccff4-m5v7j\" (UID: 
\"fa444955-5bc4-4188-9b3e-80b24e9e6cb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j" Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.290042 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa444955-5bc4-4188-9b3e-80b24e9e6cb4-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-m5v7j\" (UID: \"fa444955-5bc4-4188-9b3e-80b24e9e6cb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j" Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.311292 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa444955-5bc4-4188-9b3e-80b24e9e6cb4-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-m5v7j\" (UID: \"fa444955-5bc4-4188-9b3e-80b24e9e6cb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j" Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.311840 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqrz2\" (UniqueName: \"kubernetes.io/projected/fa444955-5bc4-4188-9b3e-80b24e9e6cb4-kube-api-access-dqrz2\") pod \"cert-manager-cainjector-855d9ccff4-m5v7j\" (UID: \"fa444955-5bc4-4188-9b3e-80b24e9e6cb4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j" Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.489467 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j" Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.947842 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6w5g5" podStartSLOduration=11.039788182 podStartE2EDuration="15.947804489s" podCreationTimestamp="2026-01-21 21:25:09 +0000 UTC" firstStartedPulling="2026-01-21 21:25:17.842434525 +0000 UTC m=+1010.064612995" lastFinishedPulling="2026-01-21 21:25:22.750450832 +0000 UTC m=+1014.972629302" observedRunningTime="2026-01-21 21:25:24.213595286 +0000 UTC m=+1016.435773776" watchObservedRunningTime="2026-01-21 21:25:24.947804489 +0000 UTC m=+1017.169982959" Jan 21 21:25:24 crc kubenswrapper[4860]: I0121 21:25:24.955205 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j"] Jan 21 21:25:25 crc kubenswrapper[4860]: I0121 21:25:25.155212 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j" event={"ID":"fa444955-5bc4-4188-9b3e-80b24e9e6cb4","Type":"ContainerStarted","Data":"beca0e70102fe3e6fbcd69dd79647a7b83ad973bb1e8aec6e4521ec735385188"} Jan 21 21:25:29 crc kubenswrapper[4860]: I0121 21:25:29.682193 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:29 crc kubenswrapper[4860]: I0121 21:25:29.682739 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:29 crc kubenswrapper[4860]: I0121 21:25:29.792765 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:30 crc kubenswrapper[4860]: I0121 21:25:30.414897 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6w5g5" Jan 21 
21:25:30 crc kubenswrapper[4860]: I0121 21:25:30.478907 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6w5g5"] Jan 21 21:25:32 crc kubenswrapper[4860]: I0121 21:25:32.242919 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6w5g5" podUID="285d2fd2-a877-4616-af45-1311a938580a" containerName="registry-server" containerID="cri-o://b9e3d949f1ef493325db1ae4fcb3b4357c0372fae034a2ec2e002518563a24df" gracePeriod=2 Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.241158 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-wzmgt"] Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.242566 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-wzmgt" Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.246552 4860 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xkxcz" Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.254337 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-wzmgt"] Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.259519 4860 generic.go:334] "Generic (PLEG): container finished" podID="285d2fd2-a877-4616-af45-1311a938580a" containerID="b9e3d949f1ef493325db1ae4fcb3b4357c0372fae034a2ec2e002518563a24df" exitCode=0 Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.259591 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w5g5" event={"ID":"285d2fd2-a877-4616-af45-1311a938580a","Type":"ContainerDied","Data":"b9e3d949f1ef493325db1ae4fcb3b4357c0372fae034a2ec2e002518563a24df"} Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.359198 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs8bt\" 
(UniqueName: \"kubernetes.io/projected/20199873-120c-483b-b74e-6d501fdb151a-kube-api-access-zs8bt\") pod \"cert-manager-86cb77c54b-wzmgt\" (UID: \"20199873-120c-483b-b74e-6d501fdb151a\") " pod="cert-manager/cert-manager-86cb77c54b-wzmgt" Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.359292 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20199873-120c-483b-b74e-6d501fdb151a-bound-sa-token\") pod \"cert-manager-86cb77c54b-wzmgt\" (UID: \"20199873-120c-483b-b74e-6d501fdb151a\") " pod="cert-manager/cert-manager-86cb77c54b-wzmgt" Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.461453 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20199873-120c-483b-b74e-6d501fdb151a-bound-sa-token\") pod \"cert-manager-86cb77c54b-wzmgt\" (UID: \"20199873-120c-483b-b74e-6d501fdb151a\") " pod="cert-manager/cert-manager-86cb77c54b-wzmgt" Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.461605 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs8bt\" (UniqueName: \"kubernetes.io/projected/20199873-120c-483b-b74e-6d501fdb151a-kube-api-access-zs8bt\") pod \"cert-manager-86cb77c54b-wzmgt\" (UID: \"20199873-120c-483b-b74e-6d501fdb151a\") " pod="cert-manager/cert-manager-86cb77c54b-wzmgt" Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.484162 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs8bt\" (UniqueName: \"kubernetes.io/projected/20199873-120c-483b-b74e-6d501fdb151a-kube-api-access-zs8bt\") pod \"cert-manager-86cb77c54b-wzmgt\" (UID: \"20199873-120c-483b-b74e-6d501fdb151a\") " pod="cert-manager/cert-manager-86cb77c54b-wzmgt" Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.484805 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20199873-120c-483b-b74e-6d501fdb151a-bound-sa-token\") pod \"cert-manager-86cb77c54b-wzmgt\" (UID: \"20199873-120c-483b-b74e-6d501fdb151a\") " pod="cert-manager/cert-manager-86cb77c54b-wzmgt" Jan 21 21:25:33 crc kubenswrapper[4860]: I0121 21:25:33.578377 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-wzmgt" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.018468 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.192911 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-utilities\") pod \"285d2fd2-a877-4616-af45-1311a938580a\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.193112 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-catalog-content\") pod \"285d2fd2-a877-4616-af45-1311a938580a\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.193151 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crb85\" (UniqueName: \"kubernetes.io/projected/285d2fd2-a877-4616-af45-1311a938580a-kube-api-access-crb85\") pod \"285d2fd2-a877-4616-af45-1311a938580a\" (UID: \"285d2fd2-a877-4616-af45-1311a938580a\") " Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.194203 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-utilities" (OuterVolumeSpecName: "utilities") pod "285d2fd2-a877-4616-af45-1311a938580a" 
(UID: "285d2fd2-a877-4616-af45-1311a938580a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.199487 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285d2fd2-a877-4616-af45-1311a938580a-kube-api-access-crb85" (OuterVolumeSpecName: "kube-api-access-crb85") pod "285d2fd2-a877-4616-af45-1311a938580a" (UID: "285d2fd2-a877-4616-af45-1311a938580a"). InnerVolumeSpecName "kube-api-access-crb85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.243588 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "285d2fd2-a877-4616-af45-1311a938580a" (UID: "285d2fd2-a877-4616-af45-1311a938580a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.391118 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.391503 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crb85\" (UniqueName: \"kubernetes.io/projected/285d2fd2-a877-4616-af45-1311a938580a-kube-api-access-crb85\") on node \"crc\" DevicePath \"\"" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.391592 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285d2fd2-a877-4616-af45-1311a938580a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.396976 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j" event={"ID":"fa444955-5bc4-4188-9b3e-80b24e9e6cb4","Type":"ContainerStarted","Data":"0d613034d4b0b926df7af8e57f8bd047e5a8bf5ec1682d74c4b6c1a664653a5d"} Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.400208 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" event={"ID":"5889d6e2-f3dc-4189-a782-cf0ad4db5e55","Type":"ContainerStarted","Data":"0331954ae0db626ce692b56c95613b6a5c30c124f596b4023a663e3719a573a1"} Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.400547 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.412170 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w5g5" event={"ID":"285d2fd2-a877-4616-af45-1311a938580a","Type":"ContainerDied","Data":"48af5decc283c9fa04a40a790d3b1910039b65a9ca359fc831063070b2705832"} Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.412252 4860 scope.go:117] "RemoveContainer" containerID="b9e3d949f1ef493325db1ae4fcb3b4357c0372fae034a2ec2e002518563a24df" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.412447 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6w5g5" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.418975 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-m5v7j" podStartSLOduration=2.463684971 podStartE2EDuration="12.418921385s" podCreationTimestamp="2026-01-21 21:25:23 +0000 UTC" firstStartedPulling="2026-01-21 21:25:24.979742523 +0000 UTC m=+1017.201921003" lastFinishedPulling="2026-01-21 21:25:34.934978957 +0000 UTC m=+1027.157157417" observedRunningTime="2026-01-21 21:25:35.417352655 +0000 UTC m=+1027.639531135" watchObservedRunningTime="2026-01-21 21:25:35.418921385 +0000 UTC m=+1027.641099855" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.442603 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-wzmgt"] Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.462539 4860 scope.go:117] "RemoveContainer" containerID="ed86def566bc29d1b49af25f6f75dc55b51d49dae8f80d8d56fddedcad2c53d9" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.477709 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" podStartSLOduration=2.495250817 podStartE2EDuration="15.477688431s" podCreationTimestamp="2026-01-21 21:25:20 +0000 UTC" firstStartedPulling="2026-01-21 21:25:21.91362194 +0000 UTC m=+1014.135800410" lastFinishedPulling="2026-01-21 21:25:34.896059554 +0000 UTC m=+1027.118238024" observedRunningTime="2026-01-21 21:25:35.455735172 +0000 UTC m=+1027.677913652" watchObservedRunningTime="2026-01-21 21:25:35.477688431 +0000 UTC m=+1027.699866911" Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.481425 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6w5g5"] Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.497844 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-6w5g5"] Jan 21 21:25:35 crc kubenswrapper[4860]: I0121 21:25:35.501019 4860 scope.go:117] "RemoveContainer" containerID="eac1b0cb5e54d5af3d2b7a7c45690ec76340eda29e472c26c65ca2cf4663dbba" Jan 21 21:25:36 crc kubenswrapper[4860]: I0121 21:25:36.422340 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-wzmgt" event={"ID":"20199873-120c-483b-b74e-6d501fdb151a","Type":"ContainerStarted","Data":"2d51ebfdd40560c01afcf67df1e855425885c808ec453aba221eb0f8d6beae96"} Jan 21 21:25:36 crc kubenswrapper[4860]: I0121 21:25:36.423683 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-wzmgt" event={"ID":"20199873-120c-483b-b74e-6d501fdb151a","Type":"ContainerStarted","Data":"3553135b5a12f658ebdb42fc414e1b57bb8454497b984e7132b79ff70ef18afc"} Jan 21 21:25:36 crc kubenswrapper[4860]: I0121 21:25:36.452974 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-wzmgt" podStartSLOduration=3.452925929 podStartE2EDuration="3.452925929s" podCreationTimestamp="2026-01-21 21:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:25:36.450281265 +0000 UTC m=+1028.672459755" watchObservedRunningTime="2026-01-21 21:25:36.452925929 +0000 UTC m=+1028.675104399" Jan 21 21:25:36 crc kubenswrapper[4860]: I0121 21:25:36.586953 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="285d2fd2-a877-4616-af45-1311a938580a" path="/var/lib/kubelet/pods/285d2fd2-a877-4616-af45-1311a938580a/volumes" Jan 21 21:25:40 crc kubenswrapper[4860]: I0121 21:25:40.609090 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-zvf7j" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.080568 4860 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/openstack-operator-index-djq4t"] Jan 21 21:25:45 crc kubenswrapper[4860]: E0121 21:25:45.081845 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285d2fd2-a877-4616-af45-1311a938580a" containerName="registry-server" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.081897 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="285d2fd2-a877-4616-af45-1311a938580a" containerName="registry-server" Jan 21 21:25:45 crc kubenswrapper[4860]: E0121 21:25:45.081916 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285d2fd2-a877-4616-af45-1311a938580a" containerName="extract-utilities" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.081923 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="285d2fd2-a877-4616-af45-1311a938580a" containerName="extract-utilities" Jan 21 21:25:45 crc kubenswrapper[4860]: E0121 21:25:45.081963 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285d2fd2-a877-4616-af45-1311a938580a" containerName="extract-content" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.081972 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="285d2fd2-a877-4616-af45-1311a938580a" containerName="extract-content" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.082173 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="285d2fd2-a877-4616-af45-1311a938580a" containerName="registry-server" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.082843 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-djq4t" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.085109 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-f5l4z" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.085479 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.085635 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.118832 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4qc8\" (UniqueName: \"kubernetes.io/projected/8413b898-c56f-4880-b823-455ea883379f-kube-api-access-d4qc8\") pod \"openstack-operator-index-djq4t\" (UID: \"8413b898-c56f-4880-b823-455ea883379f\") " pod="openstack-operators/openstack-operator-index-djq4t" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.156295 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-djq4t"] Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.220160 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4qc8\" (UniqueName: \"kubernetes.io/projected/8413b898-c56f-4880-b823-455ea883379f-kube-api-access-d4qc8\") pod \"openstack-operator-index-djq4t\" (UID: \"8413b898-c56f-4880-b823-455ea883379f\") " pod="openstack-operators/openstack-operator-index-djq4t" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.239579 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4qc8\" (UniqueName: \"kubernetes.io/projected/8413b898-c56f-4880-b823-455ea883379f-kube-api-access-d4qc8\") pod \"openstack-operator-index-djq4t\" (UID: 
\"8413b898-c56f-4880-b823-455ea883379f\") " pod="openstack-operators/openstack-operator-index-djq4t" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.419895 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-djq4t" Jan 21 21:25:45 crc kubenswrapper[4860]: I0121 21:25:45.850764 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-djq4t"] Jan 21 21:25:46 crc kubenswrapper[4860]: I0121 21:25:46.566913 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-djq4t" event={"ID":"8413b898-c56f-4880-b823-455ea883379f","Type":"ContainerStarted","Data":"64c24d7263883c227582cbaf046a5f86ad86f97de046c330f9dcd2188917bfef"} Jan 21 21:25:48 crc kubenswrapper[4860]: I0121 21:25:48.453319 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-djq4t"] Jan 21 21:25:49 crc kubenswrapper[4860]: I0121 21:25:49.065450 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bhnr9"] Jan 21 21:25:49 crc kubenswrapper[4860]: I0121 21:25:49.066663 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bhnr9" Jan 21 21:25:49 crc kubenswrapper[4860]: I0121 21:25:49.083926 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bhnr9"] Jan 21 21:25:49 crc kubenswrapper[4860]: I0121 21:25:49.204524 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhfbr\" (UniqueName: \"kubernetes.io/projected/f4f99b18-596f-4e28-8941-0b83f1cf57e5-kube-api-access-vhfbr\") pod \"openstack-operator-index-bhnr9\" (UID: \"f4f99b18-596f-4e28-8941-0b83f1cf57e5\") " pod="openstack-operators/openstack-operator-index-bhnr9" Jan 21 21:25:49 crc kubenswrapper[4860]: I0121 21:25:49.306134 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhfbr\" (UniqueName: \"kubernetes.io/projected/f4f99b18-596f-4e28-8941-0b83f1cf57e5-kube-api-access-vhfbr\") pod \"openstack-operator-index-bhnr9\" (UID: \"f4f99b18-596f-4e28-8941-0b83f1cf57e5\") " pod="openstack-operators/openstack-operator-index-bhnr9" Jan 21 21:25:49 crc kubenswrapper[4860]: I0121 21:25:49.329904 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhfbr\" (UniqueName: \"kubernetes.io/projected/f4f99b18-596f-4e28-8941-0b83f1cf57e5-kube-api-access-vhfbr\") pod \"openstack-operator-index-bhnr9\" (UID: \"f4f99b18-596f-4e28-8941-0b83f1cf57e5\") " pod="openstack-operators/openstack-operator-index-bhnr9" Jan 21 21:25:49 crc kubenswrapper[4860]: I0121 21:25:49.401971 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bhnr9" Jan 21 21:25:49 crc kubenswrapper[4860]: I0121 21:25:49.607683 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-djq4t" event={"ID":"8413b898-c56f-4880-b823-455ea883379f","Type":"ContainerStarted","Data":"ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21"} Jan 21 21:25:49 crc kubenswrapper[4860]: I0121 21:25:49.607881 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-djq4t" podUID="8413b898-c56f-4880-b823-455ea883379f" containerName="registry-server" containerID="cri-o://ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21" gracePeriod=2 Jan 21 21:25:49 crc kubenswrapper[4860]: I0121 21:25:49.631462 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bhnr9"] Jan 21 21:25:49 crc kubenswrapper[4860]: W0121 21:25:49.724397 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4f99b18_596f_4e28_8941_0b83f1cf57e5.slice/crio-cfac5045ed94da489ab3f33215cf0ee12f082f858eb67292fe63f141a50e8906 WatchSource:0}: Error finding container cfac5045ed94da489ab3f33215cf0ee12f082f858eb67292fe63f141a50e8906: Status 404 returned error can't find the container with id cfac5045ed94da489ab3f33215cf0ee12f082f858eb67292fe63f141a50e8906 Jan 21 21:25:49 crc kubenswrapper[4860]: I0121 21:25:49.950172 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-djq4t" Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.121077 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4qc8\" (UniqueName: \"kubernetes.io/projected/8413b898-c56f-4880-b823-455ea883379f-kube-api-access-d4qc8\") pod \"8413b898-c56f-4880-b823-455ea883379f\" (UID: \"8413b898-c56f-4880-b823-455ea883379f\") " Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.128360 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8413b898-c56f-4880-b823-455ea883379f-kube-api-access-d4qc8" (OuterVolumeSpecName: "kube-api-access-d4qc8") pod "8413b898-c56f-4880-b823-455ea883379f" (UID: "8413b898-c56f-4880-b823-455ea883379f"). InnerVolumeSpecName "kube-api-access-d4qc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.222812 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4qc8\" (UniqueName: \"kubernetes.io/projected/8413b898-c56f-4880-b823-455ea883379f-kube-api-access-d4qc8\") on node \"crc\" DevicePath \"\"" Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.616682 4860 generic.go:334] "Generic (PLEG): container finished" podID="8413b898-c56f-4880-b823-455ea883379f" containerID="ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21" exitCode=0 Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.617060 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-djq4t" Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.616912 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-djq4t" event={"ID":"8413b898-c56f-4880-b823-455ea883379f","Type":"ContainerDied","Data":"ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21"} Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.617146 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-djq4t" event={"ID":"8413b898-c56f-4880-b823-455ea883379f","Type":"ContainerDied","Data":"64c24d7263883c227582cbaf046a5f86ad86f97de046c330f9dcd2188917bfef"} Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.617185 4860 scope.go:117] "RemoveContainer" containerID="ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21" Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.621350 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bhnr9" event={"ID":"f4f99b18-596f-4e28-8941-0b83f1cf57e5","Type":"ContainerStarted","Data":"dbbc36c5c28f1ad19f54e8e031e007bae7e026ffedc95be8f6a9000f92f8b31e"} Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.621391 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bhnr9" event={"ID":"f4f99b18-596f-4e28-8941-0b83f1cf57e5","Type":"ContainerStarted","Data":"cfac5045ed94da489ab3f33215cf0ee12f082f858eb67292fe63f141a50e8906"} Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.641102 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bhnr9" podStartSLOduration=1.592821515 podStartE2EDuration="1.641082427s" podCreationTimestamp="2026-01-21 21:25:49 +0000 UTC" firstStartedPulling="2026-01-21 21:25:49.735635834 +0000 UTC m=+1041.957814314" lastFinishedPulling="2026-01-21 
21:25:49.783896756 +0000 UTC m=+1042.006075226" observedRunningTime="2026-01-21 21:25:50.641048576 +0000 UTC m=+1042.863227066" watchObservedRunningTime="2026-01-21 21:25:50.641082427 +0000 UTC m=+1042.863260907" Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.647122 4860 scope.go:117] "RemoveContainer" containerID="ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21" Jan 21 21:25:50 crc kubenswrapper[4860]: E0121 21:25:50.647693 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21\": container with ID starting with ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21 not found: ID does not exist" containerID="ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21" Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.649240 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21"} err="failed to get container status \"ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21\": rpc error: code = NotFound desc = could not find container \"ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21\": container with ID starting with ca704054ead003f26a34b573ba59f2237f658320be9f2d02aaa90c1b1f2d2c21 not found: ID does not exist" Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.666092 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-djq4t"] Jan 21 21:25:50 crc kubenswrapper[4860]: I0121 21:25:50.673551 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-djq4t"] Jan 21 21:25:52 crc kubenswrapper[4860]: I0121 21:25:52.587953 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8413b898-c56f-4880-b823-455ea883379f" 
path="/var/lib/kubelet/pods/8413b898-c56f-4880-b823-455ea883379f/volumes" Jan 21 21:25:59 crc kubenswrapper[4860]: I0121 21:25:59.402674 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-bhnr9" Jan 21 21:25:59 crc kubenswrapper[4860]: I0121 21:25:59.403317 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-bhnr9" Jan 21 21:25:59 crc kubenswrapper[4860]: I0121 21:25:59.439830 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-bhnr9" Jan 21 21:25:59 crc kubenswrapper[4860]: I0121 21:25:59.715089 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-bhnr9" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.761639 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s"] Jan 21 21:26:00 crc kubenswrapper[4860]: E0121 21:26:00.762062 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413b898-c56f-4880-b823-455ea883379f" containerName="registry-server" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.762081 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413b898-c56f-4880-b823-455ea883379f" containerName="registry-server" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.762248 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413b898-c56f-4880-b823-455ea883379f" containerName="registry-server" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.763534 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.768023 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xhwmg" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.791737 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s"] Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.893923 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cp5t\" (UniqueName: \"kubernetes.io/projected/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-kube-api-access-5cp5t\") pod \"ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.894320 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-util\") pod \"ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.894484 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-bundle\") pod \"ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 
21:26:00.996142 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cp5t\" (UniqueName: \"kubernetes.io/projected/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-kube-api-access-5cp5t\") pod \"ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.996226 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-util\") pod \"ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.996263 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-bundle\") pod \"ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.997091 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-bundle\") pod \"ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:00 crc kubenswrapper[4860]: I0121 21:26:00.997281 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-util\") pod \"ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:01 crc kubenswrapper[4860]: I0121 21:26:01.021612 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cp5t\" (UniqueName: \"kubernetes.io/projected/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-kube-api-access-5cp5t\") pod \"ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:01 crc kubenswrapper[4860]: I0121 21:26:01.086056 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:01 crc kubenswrapper[4860]: I0121 21:26:01.370654 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s"] Jan 21 21:26:01 crc kubenswrapper[4860]: I0121 21:26:01.701084 4860 generic.go:334] "Generic (PLEG): container finished" podID="4d46ff7a-85e0-461a-aea5-d5b8f2d39634" containerID="4604fe7ebf5c05aad65dab30c8222f8c4389f132e7614b76a3c01c687d1de5a3" exitCode=0 Jan 21 21:26:01 crc kubenswrapper[4860]: I0121 21:26:01.701154 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" event={"ID":"4d46ff7a-85e0-461a-aea5-d5b8f2d39634","Type":"ContainerDied","Data":"4604fe7ebf5c05aad65dab30c8222f8c4389f132e7614b76a3c01c687d1de5a3"} Jan 21 21:26:01 crc kubenswrapper[4860]: I0121 21:26:01.701440 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" event={"ID":"4d46ff7a-85e0-461a-aea5-d5b8f2d39634","Type":"ContainerStarted","Data":"af4eddbd8f37bb44b2a359b64462ae53074b61a2f5cbf2ac9cbdf9c5b0ae5b8d"} Jan 21 21:26:03 crc kubenswrapper[4860]: I0121 21:26:03.742481 4860 generic.go:334] "Generic (PLEG): container finished" podID="4d46ff7a-85e0-461a-aea5-d5b8f2d39634" containerID="100ca3dfd6bd2f1420438160ed788f959a5406b868494e7f98e8dfeda68628a0" exitCode=0 Jan 21 21:26:03 crc kubenswrapper[4860]: I0121 21:26:03.743051 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" event={"ID":"4d46ff7a-85e0-461a-aea5-d5b8f2d39634","Type":"ContainerDied","Data":"100ca3dfd6bd2f1420438160ed788f959a5406b868494e7f98e8dfeda68628a0"} Jan 21 21:26:04 crc kubenswrapper[4860]: I0121 21:26:04.754758 4860 generic.go:334] "Generic (PLEG): container finished" podID="4d46ff7a-85e0-461a-aea5-d5b8f2d39634" containerID="82abb4cc31baef5c29f3df26f2d1a7563c651af601d700af4544ed5a1f54a53a" exitCode=0 Jan 21 21:26:04 crc kubenswrapper[4860]: I0121 21:26:04.754830 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" event={"ID":"4d46ff7a-85e0-461a-aea5-d5b8f2d39634","Type":"ContainerDied","Data":"82abb4cc31baef5c29f3df26f2d1a7563c651af601d700af4544ed5a1f54a53a"} Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.014090 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.080349 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-util\") pod \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.080414 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-bundle\") pod \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.080550 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cp5t\" (UniqueName: \"kubernetes.io/projected/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-kube-api-access-5cp5t\") pod \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\" (UID: \"4d46ff7a-85e0-461a-aea5-d5b8f2d39634\") " Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.081499 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-bundle" (OuterVolumeSpecName: "bundle") pod "4d46ff7a-85e0-461a-aea5-d5b8f2d39634" (UID: "4d46ff7a-85e0-461a-aea5-d5b8f2d39634"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.086363 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-kube-api-access-5cp5t" (OuterVolumeSpecName: "kube-api-access-5cp5t") pod "4d46ff7a-85e0-461a-aea5-d5b8f2d39634" (UID: "4d46ff7a-85e0-461a-aea5-d5b8f2d39634"). InnerVolumeSpecName "kube-api-access-5cp5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.094420 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-util" (OuterVolumeSpecName: "util") pod "4d46ff7a-85e0-461a-aea5-d5b8f2d39634" (UID: "4d46ff7a-85e0-461a-aea5-d5b8f2d39634"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.182321 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cp5t\" (UniqueName: \"kubernetes.io/projected/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-kube-api-access-5cp5t\") on node \"crc\" DevicePath \"\"" Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.182369 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-util\") on node \"crc\" DevicePath \"\"" Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.182423 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d46ff7a-85e0-461a-aea5-d5b8f2d39634-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.769239 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" event={"ID":"4d46ff7a-85e0-461a-aea5-d5b8f2d39634","Type":"ContainerDied","Data":"af4eddbd8f37bb44b2a359b64462ae53074b61a2f5cbf2ac9cbdf9c5b0ae5b8d"} Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.769286 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af4eddbd8f37bb44b2a359b64462ae53074b61a2f5cbf2ac9cbdf9c5b0ae5b8d" Jan 21 21:26:06 crc kubenswrapper[4860]: I0121 21:26:06.769365 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.137413 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-657d864869-q6v9p"] Jan 21 21:26:13 crc kubenswrapper[4860]: E0121 21:26:13.138623 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d46ff7a-85e0-461a-aea5-d5b8f2d39634" containerName="pull" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.138643 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d46ff7a-85e0-461a-aea5-d5b8f2d39634" containerName="pull" Jan 21 21:26:13 crc kubenswrapper[4860]: E0121 21:26:13.138659 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d46ff7a-85e0-461a-aea5-d5b8f2d39634" containerName="util" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.138665 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d46ff7a-85e0-461a-aea5-d5b8f2d39634" containerName="util" Jan 21 21:26:13 crc kubenswrapper[4860]: E0121 21:26:13.138676 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d46ff7a-85e0-461a-aea5-d5b8f2d39634" containerName="extract" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.138682 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d46ff7a-85e0-461a-aea5-d5b8f2d39634" containerName="extract" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.138816 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d46ff7a-85e0-461a-aea5-d5b8f2d39634" containerName="extract" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.139565 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.142973 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-bgsln" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.166481 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-657d864869-q6v9p"] Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.312761 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wprg5\" (UniqueName: \"kubernetes.io/projected/00e7e600-d3e0-4dc7-9b65-48c39d9c2938-kube-api-access-wprg5\") pod \"openstack-operator-controller-init-657d864869-q6v9p\" (UID: \"00e7e600-d3e0-4dc7-9b65-48c39d9c2938\") " pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.414784 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wprg5\" (UniqueName: \"kubernetes.io/projected/00e7e600-d3e0-4dc7-9b65-48c39d9c2938-kube-api-access-wprg5\") pod \"openstack-operator-controller-init-657d864869-q6v9p\" (UID: \"00e7e600-d3e0-4dc7-9b65-48c39d9c2938\") " pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.439047 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wprg5\" (UniqueName: \"kubernetes.io/projected/00e7e600-d3e0-4dc7-9b65-48c39d9c2938-kube-api-access-wprg5\") pod \"openstack-operator-controller-init-657d864869-q6v9p\" (UID: \"00e7e600-d3e0-4dc7-9b65-48c39d9c2938\") " pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.468456 4860 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" Jan 21 21:26:13 crc kubenswrapper[4860]: I0121 21:26:13.938274 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-657d864869-q6v9p"] Jan 21 21:26:14 crc kubenswrapper[4860]: I0121 21:26:14.849334 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" event={"ID":"00e7e600-d3e0-4dc7-9b65-48c39d9c2938","Type":"ContainerStarted","Data":"eba75e3a2ab20b88298fa76d026f1e9b02b5a484404d0e8a441379aa28efbf5a"} Jan 21 21:26:20 crc kubenswrapper[4860]: I0121 21:26:20.907399 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" event={"ID":"00e7e600-d3e0-4dc7-9b65-48c39d9c2938","Type":"ContainerStarted","Data":"fc5ed30b24d0bedfe847d4125b01b52dec47b3697f436189073226e792a27f1e"} Jan 21 21:26:20 crc kubenswrapper[4860]: I0121 21:26:20.908054 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" Jan 21 21:26:20 crc kubenswrapper[4860]: I0121 21:26:20.937113 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" podStartSLOduration=1.555026083 podStartE2EDuration="7.937086762s" podCreationTimestamp="2026-01-21 21:26:13 +0000 UTC" firstStartedPulling="2026-01-21 21:26:13.947823696 +0000 UTC m=+1066.170002166" lastFinishedPulling="2026-01-21 21:26:20.329884375 +0000 UTC m=+1072.552062845" observedRunningTime="2026-01-21 21:26:20.934352006 +0000 UTC m=+1073.156530486" watchObservedRunningTime="2026-01-21 21:26:20.937086762 +0000 UTC m=+1073.159265232" Jan 21 21:26:33 crc kubenswrapper[4860]: I0121 21:26:33.473148 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.478602 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.481173 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.484482 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qz26j" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.488086 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.489295 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.492255 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-rjvc7" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.515136 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.522753 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.523641 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.527121 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-j6dc8" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.539889 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.561542 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.564908 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.567124 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-fntvg" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.569015 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.613294 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.626238 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsbp4\" (UniqueName: \"kubernetes.io/projected/2dd3e1b9-abea-4287-87e0-cb3f60423d54-kube-api-access-tsbp4\") pod \"cinder-operator-controller-manager-69cf5d4557-c95ps\" (UID: \"2dd3e1b9-abea-4287-87e0-cb3f60423d54\") " 
pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.626313 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bkq6\" (UniqueName: \"kubernetes.io/projected/404e97a3-3fcd-4ec0-a67d-53ed93d62685-kube-api-access-6bkq6\") pod \"barbican-operator-controller-manager-59dd8b7cbf-sslzp\" (UID: \"404e97a3-3fcd-4ec0-a67d-53ed93d62685\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.626340 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwzvr\" (UniqueName: \"kubernetes.io/projected/1a209a81-fb7b-4621-84db-567f96093a6b-kube-api-access-zwzvr\") pod \"designate-operator-controller-manager-b45d7bf98-vrvmq\" (UID: \"1a209a81-fb7b-4621-84db-567f96093a6b\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.682330 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.715313 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.716109 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.717343 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.724654 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mrscg" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.724892 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-fcg69" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.732824 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsbp4\" (UniqueName: \"kubernetes.io/projected/2dd3e1b9-abea-4287-87e0-cb3f60423d54-kube-api-access-tsbp4\") pod \"cinder-operator-controller-manager-69cf5d4557-c95ps\" (UID: \"2dd3e1b9-abea-4287-87e0-cb3f60423d54\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.733018 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bkq6\" (UniqueName: \"kubernetes.io/projected/404e97a3-3fcd-4ec0-a67d-53ed93d62685-kube-api-access-6bkq6\") pod \"barbican-operator-controller-manager-59dd8b7cbf-sslzp\" (UID: \"404e97a3-3fcd-4ec0-a67d-53ed93d62685\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.733065 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwzvr\" (UniqueName: \"kubernetes.io/projected/1a209a81-fb7b-4621-84db-567f96093a6b-kube-api-access-zwzvr\") pod \"designate-operator-controller-manager-b45d7bf98-vrvmq\" (UID: \"1a209a81-fb7b-4621-84db-567f96093a6b\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.733126 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nrjr\" (UniqueName: \"kubernetes.io/projected/33a0c624-f40b-4d45-9b00-39c36c15d6bb-kube-api-access-5nrjr\") pod \"glance-operator-controller-manager-78fdd796fd-p7jg2\" (UID: \"33a0c624-f40b-4d45-9b00-39c36c15d6bb\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.797144 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bkq6\" (UniqueName: \"kubernetes.io/projected/404e97a3-3fcd-4ec0-a67d-53ed93d62685-kube-api-access-6bkq6\") pod \"barbican-operator-controller-manager-59dd8b7cbf-sslzp\" (UID: \"404e97a3-3fcd-4ec0-a67d-53ed93d62685\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.797907 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsbp4\" (UniqueName: \"kubernetes.io/projected/2dd3e1b9-abea-4287-87e0-cb3f60423d54-kube-api-access-tsbp4\") pod \"cinder-operator-controller-manager-69cf5d4557-c95ps\" (UID: \"2dd3e1b9-abea-4287-87e0-cb3f60423d54\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.804025 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.810110 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.812923 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwzvr\" (UniqueName: \"kubernetes.io/projected/1a209a81-fb7b-4621-84db-567f96093a6b-kube-api-access-zwzvr\") pod 
\"designate-operator-controller-manager-b45d7bf98-vrvmq\" (UID: \"1a209a81-fb7b-4621-84db-567f96093a6b\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.819363 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.829039 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.829525 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.830155 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.836381 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-rdr9f" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.836577 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.836982 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvhf\" (UniqueName: \"kubernetes.io/projected/084bba8e-36e4-4e04-8109-4b0f6f97d37f-kube-api-access-tqvhf\") pod \"horizon-operator-controller-manager-77d5c5b54f-pvq7t\" (UID: \"084bba8e-36e4-4e04-8109-4b0f6f97d37f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.837042 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbjjq\" (UniqueName: \"kubernetes.io/projected/f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85-kube-api-access-rbjjq\") pod \"heat-operator-controller-manager-594c8c9d5d-b29tb\" (UID: \"f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.837082 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nrjr\" (UniqueName: \"kubernetes.io/projected/33a0c624-f40b-4d45-9b00-39c36c15d6bb-kube-api-access-5nrjr\") pod \"glance-operator-controller-manager-78fdd796fd-p7jg2\" (UID: \"33a0c624-f40b-4d45-9b00-39c36c15d6bb\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.843158 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.857044 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.874703 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.876140 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.880958 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8ttrd" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.920407 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.927251 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nrjr\" (UniqueName: \"kubernetes.io/projected/33a0c624-f40b-4d45-9b00-39c36c15d6bb-kube-api-access-5nrjr\") pod \"glance-operator-controller-manager-78fdd796fd-p7jg2\" (UID: \"33a0c624-f40b-4d45-9b00-39c36c15d6bb\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.938949 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqvhf\" (UniqueName: \"kubernetes.io/projected/084bba8e-36e4-4e04-8109-4b0f6f97d37f-kube-api-access-tqvhf\") pod \"horizon-operator-controller-manager-77d5c5b54f-pvq7t\" (UID: \"084bba8e-36e4-4e04-8109-4b0f6f97d37f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.939007 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbjjq\" (UniqueName: \"kubernetes.io/projected/f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85-kube-api-access-rbjjq\") pod \"heat-operator-controller-manager-594c8c9d5d-b29tb\" (UID: \"f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.939042 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5trfh\" (UniqueName: \"kubernetes.io/projected/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-kube-api-access-5trfh\") pod \"infra-operator-controller-manager-54ccf4f85d-8hx7p\" (UID: \"3d5ae9ad-1309-4221-b99a-86b9e5aa075b\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.939089 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-8hx7p\" (UID: \"3d5ae9ad-1309-4221-b99a-86b9e5aa075b\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.951049 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.952325 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.960248 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf"] Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.961764 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-p2dcv" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.963261 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.965418 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqvhf\" (UniqueName: \"kubernetes.io/projected/084bba8e-36e4-4e04-8109-4b0f6f97d37f-kube-api-access-tqvhf\") pod \"horizon-operator-controller-manager-77d5c5b54f-pvq7t\" (UID: \"084bba8e-36e4-4e04-8109-4b0f6f97d37f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.967779 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbjjq\" (UniqueName: \"kubernetes.io/projected/f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85-kube-api-access-rbjjq\") pod \"heat-operator-controller-manager-594c8c9d5d-b29tb\" (UID: \"f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.979756 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-8krtl" Jan 21 21:26:54 crc kubenswrapper[4860]: I0121 21:26:54.994748 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.013411 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.048036 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd862\" (UniqueName: \"kubernetes.io/projected/d107aacb-3e12-43fd-a68c-2a6b2c10295c-kube-api-access-qd862\") pod \"ironic-operator-controller-manager-69d6c9f5b8-ldzzc\" (UID: 
\"d107aacb-3e12-43fd-a68c-2a6b2c10295c\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.048617 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5trfh\" (UniqueName: \"kubernetes.io/projected/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-kube-api-access-5trfh\") pod \"infra-operator-controller-manager-54ccf4f85d-8hx7p\" (UID: \"3d5ae9ad-1309-4221-b99a-86b9e5aa075b\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.048742 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.048800 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-8hx7p\" (UID: \"3d5ae9ad-1309-4221-b99a-86b9e5aa075b\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:26:55 crc kubenswrapper[4860]: E0121 21:26:55.049176 4860 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 21:26:55 crc kubenswrapper[4860]: E0121 21:26:55.049322 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert podName:3d5ae9ad-1309-4221-b99a-86b9e5aa075b nodeName:}" failed. No retries permitted until 2026-01-21 21:26:55.549269855 +0000 UTC m=+1107.771448325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert") pod "infra-operator-controller-manager-54ccf4f85d-8hx7p" (UID: "3d5ae9ad-1309-4221-b99a-86b9e5aa075b") : secret "infra-operator-webhook-server-cert" not found Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.050866 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.058617 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.066657 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.071291 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gtlfl" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.109539 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.132778 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5trfh\" (UniqueName: \"kubernetes.io/projected/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-kube-api-access-5trfh\") pod \"infra-operator-controller-manager-54ccf4f85d-8hx7p\" (UID: \"3d5ae9ad-1309-4221-b99a-86b9e5aa075b\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.154176 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmn7p\" (UniqueName: \"kubernetes.io/projected/4f7ce297-eef0-4067-bd7b-1bb64ced0239-kube-api-access-hmn7p\") pod \"mariadb-operator-controller-manager-c87fff755-w857v\" (UID: \"4f7ce297-eef0-4067-bd7b-1bb64ced0239\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.154247 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5t65\" (UniqueName: \"kubernetes.io/projected/96503e13-4e73-4048-be57-01a726c114da-kube-api-access-j5t65\") pod \"keystone-operator-controller-manager-b8b6d4659-4vpgf\" (UID: \"96503e13-4e73-4048-be57-01a726c114da\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.154832 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd862\" (UniqueName: \"kubernetes.io/projected/d107aacb-3e12-43fd-a68c-2a6b2c10295c-kube-api-access-qd862\") pod \"ironic-operator-controller-manager-69d6c9f5b8-ldzzc\" (UID: \"d107aacb-3e12-43fd-a68c-2a6b2c10295c\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" Jan 21 21:26:55 crc 
kubenswrapper[4860]: I0121 21:26:55.154894 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9vrz\" (UniqueName: \"kubernetes.io/projected/519cbf74-c4d7-425b-837d-afbb85f3ecc4-kube-api-access-c9vrz\") pod \"manila-operator-controller-manager-78c6999f6f-w6jg6\" (UID: \"519cbf74-c4d7-425b-837d-afbb85f3ecc4\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.178106 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.179744 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.184966 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dh8bm" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.191125 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.192451 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.192445 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.204760 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-b5wwc" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.230089 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd862\" (UniqueName: \"kubernetes.io/projected/d107aacb-3e12-43fd-a68c-2a6b2c10295c-kube-api-access-qd862\") pod \"ironic-operator-controller-manager-69d6c9f5b8-ldzzc\" (UID: \"d107aacb-3e12-43fd-a68c-2a6b2c10295c\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.252920 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.259237 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggh8t\" (UniqueName: \"kubernetes.io/projected/626c3db6-f60f-472b-b0e5-0834b5bded25-kube-api-access-ggh8t\") pod \"neutron-operator-controller-manager-5d8f59fb49-8mv6c\" (UID: \"626c3db6-f60f-472b-b0e5-0834b5bded25\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.259308 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9vrz\" (UniqueName: \"kubernetes.io/projected/519cbf74-c4d7-425b-837d-afbb85f3ecc4-kube-api-access-c9vrz\") pod \"manila-operator-controller-manager-78c6999f6f-w6jg6\" (UID: \"519cbf74-c4d7-425b-837d-afbb85f3ecc4\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.259395 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vf7p\" (UniqueName: \"kubernetes.io/projected/69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5-kube-api-access-7vf7p\") pod \"nova-operator-controller-manager-6b8bc8d87d-nn25n\" (UID: \"69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.259461 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmn7p\" (UniqueName: \"kubernetes.io/projected/4f7ce297-eef0-4067-bd7b-1bb64ced0239-kube-api-access-hmn7p\") pod \"mariadb-operator-controller-manager-c87fff755-w857v\" (UID: \"4f7ce297-eef0-4067-bd7b-1bb64ced0239\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.259502 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5t65\" (UniqueName: \"kubernetes.io/projected/96503e13-4e73-4048-be57-01a726c114da-kube-api-access-j5t65\") pod \"keystone-operator-controller-manager-b8b6d4659-4vpgf\" (UID: \"96503e13-4e73-4048-be57-01a726c114da\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.295086 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.303056 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.321968 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5t65\" (UniqueName: \"kubernetes.io/projected/96503e13-4e73-4048-be57-01a726c114da-kube-api-access-j5t65\") pod \"keystone-operator-controller-manager-b8b6d4659-4vpgf\" (UID: \"96503e13-4e73-4048-be57-01a726c114da\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.324771 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9vrz\" (UniqueName: \"kubernetes.io/projected/519cbf74-c4d7-425b-837d-afbb85f3ecc4-kube-api-access-c9vrz\") pod \"manila-operator-controller-manager-78c6999f6f-w6jg6\" (UID: \"519cbf74-c4d7-425b-837d-afbb85f3ecc4\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.325255 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.334921 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmn7p\" (UniqueName: \"kubernetes.io/projected/4f7ce297-eef0-4067-bd7b-1bb64ced0239-kube-api-access-hmn7p\") pod \"mariadb-operator-controller-manager-c87fff755-w857v\" (UID: \"4f7ce297-eef0-4067-bd7b-1bb64ced0239\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.363266 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.370151 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggh8t\" (UniqueName: \"kubernetes.io/projected/626c3db6-f60f-472b-b0e5-0834b5bded25-kube-api-access-ggh8t\") pod \"neutron-operator-controller-manager-5d8f59fb49-8mv6c\" (UID: \"626c3db6-f60f-472b-b0e5-0834b5bded25\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.370237 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vf7p\" (UniqueName: \"kubernetes.io/projected/69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5-kube-api-access-7vf7p\") pod \"nova-operator-controller-manager-6b8bc8d87d-nn25n\" (UID: \"69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.399889 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.416632 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.431253 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vf7p\" (UniqueName: \"kubernetes.io/projected/69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5-kube-api-access-7vf7p\") pod \"nova-operator-controller-manager-6b8bc8d87d-nn25n\" (UID: \"69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.432271 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.437323 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggh8t\" (UniqueName: \"kubernetes.io/projected/626c3db6-f60f-472b-b0e5-0834b5bded25-kube-api-access-ggh8t\") pod \"neutron-operator-controller-manager-5d8f59fb49-8mv6c\" (UID: \"626c3db6-f60f-472b-b0e5-0834b5bded25\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.447036 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-r2hl7" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.450275 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.473030 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2hs8\" (UniqueName: \"kubernetes.io/projected/adcb4b85-f016-45ed-8029-7191ade5683a-kube-api-access-k2hs8\") pod \"octavia-operator-controller-manager-7bd9774b6-q8wm8\" (UID: \"adcb4b85-f016-45ed-8029-7191ade5683a\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.476696 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.480701 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.485117 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-gwzbs" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.545482 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.547088 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.551229 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6q7vm" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.558969 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.560462 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.560560 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.567619 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.573126 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.577088 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jtg9p" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.582863 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.582950 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-8hx7p\" (UID: \"3d5ae9ad-1309-4221-b99a-86b9e5aa075b\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.583041 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg4wh\" (UniqueName: \"kubernetes.io/projected/a5eceab3-1171-484d-91da-990d323440d4-kube-api-access-bg4wh\") pod \"ovn-operator-controller-manager-55db956ddc-nbvmh\" (UID: 
\"a5eceab3-1171-484d-91da-990d323440d4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.583077 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2hs8\" (UniqueName: \"kubernetes.io/projected/adcb4b85-f016-45ed-8029-7191ade5683a-kube-api-access-k2hs8\") pod \"octavia-operator-controller-manager-7bd9774b6-q8wm8\" (UID: \"adcb4b85-f016-45ed-8029-7191ade5683a\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.583129 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhb5b\" (UniqueName: \"kubernetes.io/projected/9731b174-d203-4170-b49f-0de94000f154-kube-api-access-rhb5b\") pod \"placement-operator-controller-manager-5d646b7d76-m892h\" (UID: \"9731b174-d203-4170-b49f-0de94000f154\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.583197 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl4z6\" (UniqueName: \"kubernetes.io/projected/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-kube-api-access-zl4z6\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:26:55 crc kubenswrapper[4860]: E0121 21:26:55.583764 4860 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 21:26:55 crc kubenswrapper[4860]: E0121 21:26:55.583951 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert 
podName:3d5ae9ad-1309-4221-b99a-86b9e5aa075b nodeName:}" failed. No retries permitted until 2026-01-21 21:26:56.583904107 +0000 UTC m=+1108.806082577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert") pod "infra-operator-controller-manager-54ccf4f85d-8hx7p" (UID: "3d5ae9ad-1309-4221-b99a-86b9e5aa075b") : secret "infra-operator-webhook-server-cert" not found Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.654016 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.674621 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.687320 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2hs8\" (UniqueName: \"kubernetes.io/projected/adcb4b85-f016-45ed-8029-7191ade5683a-kube-api-access-k2hs8\") pod \"octavia-operator-controller-manager-7bd9774b6-q8wm8\" (UID: \"adcb4b85-f016-45ed-8029-7191ade5683a\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.695305 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.695651 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg4wh\" (UniqueName: 
\"kubernetes.io/projected/a5eceab3-1171-484d-91da-990d323440d4-kube-api-access-bg4wh\") pod \"ovn-operator-controller-manager-55db956ddc-nbvmh\" (UID: \"a5eceab3-1171-484d-91da-990d323440d4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.695710 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhb5b\" (UniqueName: \"kubernetes.io/projected/9731b174-d203-4170-b49f-0de94000f154-kube-api-access-rhb5b\") pod \"placement-operator-controller-manager-5d646b7d76-m892h\" (UID: \"9731b174-d203-4170-b49f-0de94000f154\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.695764 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl4z6\" (UniqueName: \"kubernetes.io/projected/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-kube-api-access-zl4z6\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:26:55 crc kubenswrapper[4860]: E0121 21:26:55.703269 4860 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:26:55 crc kubenswrapper[4860]: E0121 21:26:55.703388 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert podName:95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96 nodeName:}" failed. No retries permitted until 2026-01-21 21:26:56.203362766 +0000 UTC m=+1108.425541236 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854787gn" (UID: "95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.719990 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.731717 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.796103 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-q2g4c" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.799061 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg4wh\" (UniqueName: \"kubernetes.io/projected/a5eceab3-1171-484d-91da-990d323440d4-kube-api-access-bg4wh\") pod \"ovn-operator-controller-manager-55db956ddc-nbvmh\" (UID: \"a5eceab3-1171-484d-91da-990d323440d4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.799671 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.800517 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9"] Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.810217 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl4z6\" (UniqueName: \"kubernetes.io/projected/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-kube-api-access-zl4z6\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.866391 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhb5b\" (UniqueName: \"kubernetes.io/projected/9731b174-d203-4170-b49f-0de94000f154-kube-api-access-rhb5b\") pod \"placement-operator-controller-manager-5d646b7d76-m892h\" (UID: \"9731b174-d203-4170-b49f-0de94000f154\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.887747 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.914840 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7xg\" (UniqueName: \"kubernetes.io/projected/b4019683-a628-42e6-91ba-1cb0505326e3-kube-api-access-hw7xg\") pod \"swift-operator-controller-manager-547cbdb99f-pv9x9\" (UID: \"b4019683-a628-42e6-91ba-1cb0505326e3\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9" Jan 21 21:26:55 crc kubenswrapper[4860]: I0121 21:26:55.935842 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.084317 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.110265 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw7xg\" (UniqueName: \"kubernetes.io/projected/b4019683-a628-42e6-91ba-1cb0505326e3-kube-api-access-hw7xg\") pod \"swift-operator-controller-manager-547cbdb99f-pv9x9\" (UID: \"b4019683-a628-42e6-91ba-1cb0505326e3\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.142440 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw7xg\" (UniqueName: \"kubernetes.io/projected/b4019683-a628-42e6-91ba-1cb0505326e3-kube-api-access-hw7xg\") pod \"swift-operator-controller-manager-547cbdb99f-pv9x9\" (UID: \"b4019683-a628-42e6-91ba-1cb0505326e3\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.151054 4860 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.151100 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.151868 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.152077 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.170457 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.181538 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.187452 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.188557 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.203907 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.211678 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:26:56 crc kubenswrapper[4860]: E0121 21:26:56.211911 4860 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:26:56 crc kubenswrapper[4860]: E0121 21:26:56.212004 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert podName:95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96 nodeName:}" failed. No retries permitted until 2026-01-21 21:26:57.211981744 +0000 UTC m=+1109.434160214 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854787gn" (UID: "95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.227402 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-ccs4g" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.227645 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-8m984" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.228301 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jdkzp" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.255086 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.256217 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.259396 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.259536 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.259651 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-6b7wn" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.300560 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.306771 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.308197 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.312854 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k6gc\" (UniqueName: \"kubernetes.io/projected/3f367ab5-2df3-466b-8ec4-7c4f23dcc578-kube-api-access-7k6gc\") pod \"test-operator-controller-manager-69797bbcbd-tldvn\" (UID: \"3f367ab5-2df3-466b-8ec4-7c4f23dcc578\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.312890 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pjtd\" (UniqueName: \"kubernetes.io/projected/38566005-2062-4d80-a44a-11976396a2aa-kube-api-access-6pjtd\") pod \"watcher-operator-controller-manager-674d7f6576-jn79v\" (UID: \"38566005-2062-4d80-a44a-11976396a2aa\") " pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.312919 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx22n\" (UniqueName: \"kubernetes.io/projected/61a273d5-b25c-4729-8736-9965ac435468-kube-api-access-nx22n\") pod \"telemetry-operator-controller-manager-85cd9769bb-bk9sb\" (UID: \"61a273d5-b25c-4729-8736-9965ac435468\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.319098 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.321123 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-vgtnj" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 
21:26:56.380214 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.419201 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpb2w\" (UniqueName: \"kubernetes.io/projected/8dad99b9-0de7-450d-8c58-96590671dd98-kube-api-access-dpb2w\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.419386 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.419461 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.419492 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj755\" (UniqueName: \"kubernetes.io/projected/93010989-aa15-487c-b470-919932329af1-kube-api-access-kj755\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mpknx\" (UID: \"93010989-aa15-487c-b470-919932329af1\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.419570 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k6gc\" (UniqueName: \"kubernetes.io/projected/3f367ab5-2df3-466b-8ec4-7c4f23dcc578-kube-api-access-7k6gc\") pod \"test-operator-controller-manager-69797bbcbd-tldvn\" (UID: \"3f367ab5-2df3-466b-8ec4-7c4f23dcc578\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.419621 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pjtd\" (UniqueName: \"kubernetes.io/projected/38566005-2062-4d80-a44a-11976396a2aa-kube-api-access-6pjtd\") pod \"watcher-operator-controller-manager-674d7f6576-jn79v\" (UID: \"38566005-2062-4d80-a44a-11976396a2aa\") " pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.419690 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx22n\" (UniqueName: \"kubernetes.io/projected/61a273d5-b25c-4729-8736-9965ac435468-kube-api-access-nx22n\") pod \"telemetry-operator-controller-manager-85cd9769bb-bk9sb\" (UID: \"61a273d5-b25c-4729-8736-9965ac435468\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.430734 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.449727 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k6gc\" (UniqueName: \"kubernetes.io/projected/3f367ab5-2df3-466b-8ec4-7c4f23dcc578-kube-api-access-7k6gc\") pod \"test-operator-controller-manager-69797bbcbd-tldvn\" (UID: 
\"3f367ab5-2df3-466b-8ec4-7c4f23dcc578\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.458298 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pjtd\" (UniqueName: \"kubernetes.io/projected/38566005-2062-4d80-a44a-11976396a2aa-kube-api-access-6pjtd\") pod \"watcher-operator-controller-manager-674d7f6576-jn79v\" (UID: \"38566005-2062-4d80-a44a-11976396a2aa\") " pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.458438 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx22n\" (UniqueName: \"kubernetes.io/projected/61a273d5-b25c-4729-8736-9965ac435468-kube-api-access-nx22n\") pod \"telemetry-operator-controller-manager-85cd9769bb-bk9sb\" (UID: \"61a273d5-b25c-4729-8736-9965ac435468\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.462339 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.490578 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.510907 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.521567 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpb2w\" (UniqueName: \"kubernetes.io/projected/8dad99b9-0de7-450d-8c58-96590671dd98-kube-api-access-dpb2w\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.521639 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.521668 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.521692 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj755\" (UniqueName: \"kubernetes.io/projected/93010989-aa15-487c-b470-919932329af1-kube-api-access-kj755\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mpknx\" (UID: \"93010989-aa15-487c-b470-919932329af1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" Jan 21 21:26:56 crc kubenswrapper[4860]: E0121 21:26:56.524200 4860 secret.go:188] Couldn't 
get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 21:26:56 crc kubenswrapper[4860]: E0121 21:26:56.524320 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs podName:8dad99b9-0de7-450d-8c58-96590671dd98 nodeName:}" failed. No retries permitted until 2026-01-21 21:26:57.024292334 +0000 UTC m=+1109.246470804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs") pod "openstack-operator-controller-manager-6c98596b-6jfrl" (UID: "8dad99b9-0de7-450d-8c58-96590671dd98") : secret "webhook-server-cert" not found Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.536238 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" Jan 21 21:26:56 crc kubenswrapper[4860]: E0121 21:26:56.544549 4860 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 21:26:56 crc kubenswrapper[4860]: E0121 21:26:56.544782 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs podName:8dad99b9-0de7-450d-8c58-96590671dd98 nodeName:}" failed. No retries permitted until 2026-01-21 21:26:57.044657956 +0000 UTC m=+1109.266836416 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs") pod "openstack-operator-controller-manager-6c98596b-6jfrl" (UID: "8dad99b9-0de7-450d-8c58-96590671dd98") : secret "metrics-server-cert" not found Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.590822 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj755\" (UniqueName: \"kubernetes.io/projected/93010989-aa15-487c-b470-919932329af1-kube-api-access-kj755\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mpknx\" (UID: \"93010989-aa15-487c-b470-919932329af1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.609494 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpb2w\" (UniqueName: \"kubernetes.io/projected/8dad99b9-0de7-450d-8c58-96590671dd98-kube-api-access-dpb2w\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.625450 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.627127 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-8hx7p\" (UID: \"3d5ae9ad-1309-4221-b99a-86b9e5aa075b\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:26:56 crc kubenswrapper[4860]: E0121 21:26:56.635660 4860 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 21:26:56 crc kubenswrapper[4860]: E0121 21:26:56.635731 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert podName:3d5ae9ad-1309-4221-b99a-86b9e5aa075b nodeName:}" failed. No retries permitted until 2026-01-21 21:26:58.635713137 +0000 UTC m=+1110.857891597 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert") pod "infra-operator-controller-manager-54ccf4f85d-8hx7p" (UID: "3d5ae9ad-1309-4221-b99a-86b9e5aa075b") : secret "infra-operator-webhook-server-cert" not found Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.675878 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq"] Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.676920 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps"] Jan 21 21:26:56 crc kubenswrapper[4860]: W0121 21:26:56.755161 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dd3e1b9_abea_4287_87e0_cb3f60423d54.slice/crio-514b8e9bfe4873af653efc37b558e5fe3680a8d05f733fc57be3ea6e574a2d4d WatchSource:0}: Error finding container 514b8e9bfe4873af653efc37b558e5fe3680a8d05f733fc57be3ea6e574a2d4d: Status 404 returned error can't find the container with id 514b8e9bfe4873af653efc37b558e5fe3680a8d05f733fc57be3ea6e574a2d4d Jan 21 21:26:56 crc kubenswrapper[4860]: I0121 21:26:56.988240 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb"] Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.053129 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.053232 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:26:57 crc kubenswrapper[4860]: E0121 21:26:57.053364 4860 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 21:26:57 crc kubenswrapper[4860]: E0121 21:26:57.053446 4860 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 21:26:57 crc kubenswrapper[4860]: E0121 21:26:57.053467 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs podName:8dad99b9-0de7-450d-8c58-96590671dd98 nodeName:}" failed. No retries permitted until 2026-01-21 21:26:58.053442588 +0000 UTC m=+1110.275621058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs") pod "openstack-operator-controller-manager-6c98596b-6jfrl" (UID: "8dad99b9-0de7-450d-8c58-96590671dd98") : secret "metrics-server-cert" not found Jan 21 21:26:57 crc kubenswrapper[4860]: E0121 21:26:57.053538 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs podName:8dad99b9-0de7-450d-8c58-96590671dd98 nodeName:}" failed. No retries permitted until 2026-01-21 21:26:58.053511991 +0000 UTC m=+1110.275690461 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs") pod "openstack-operator-controller-manager-6c98596b-6jfrl" (UID: "8dad99b9-0de7-450d-8c58-96590671dd98") : secret "webhook-server-cert" not found Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.255138 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:26:57 crc kubenswrapper[4860]: E0121 21:26:57.255750 4860 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:26:57 crc kubenswrapper[4860]: E0121 21:26:57.255808 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert podName:95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96 nodeName:}" failed. No retries permitted until 2026-01-21 21:26:59.25579176 +0000 UTC m=+1111.477970230 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854787gn" (UID: "95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.385828 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps" event={"ID":"2dd3e1b9-abea-4287-87e0-cb3f60423d54","Type":"ContainerStarted","Data":"514b8e9bfe4873af653efc37b558e5fe3680a8d05f733fc57be3ea6e574a2d4d"} Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.399894 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" event={"ID":"f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85","Type":"ContainerStarted","Data":"8c50cb77cc524117f43d94ebdedc2f077c6a549fb3b702cd32553c8835898636"} Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.401486 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq" event={"ID":"1a209a81-fb7b-4621-84db-567f96093a6b","Type":"ContainerStarted","Data":"49b6855970f24179c32a1023510092e92d4b01093eeb825b597ec9970552ea40"} Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.408332 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp" event={"ID":"404e97a3-3fcd-4ec0-a67d-53ed93d62685","Type":"ContainerStarted","Data":"88e7987420a863ef822265eb03aad290455c8bd930ee4c543d12b48785ccc857"} Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.432781 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t"] Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.861074 4860 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6"] Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.871840 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf"] Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.890234 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2"] Jan 21 21:26:57 crc kubenswrapper[4860]: W0121 21:26:57.897867 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod519cbf74_c4d7_425b_837d_afbb85f3ecc4.slice/crio-ce2a479ac999309dbb4175b7372362719c5602ef2cc3ca48964418363afbe44a WatchSource:0}: Error finding container ce2a479ac999309dbb4175b7372362719c5602ef2cc3ca48964418363afbe44a: Status 404 returned error can't find the container with id ce2a479ac999309dbb4175b7372362719c5602ef2cc3ca48964418363afbe44a Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.902025 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc"] Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.918541 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v"] Jan 21 21:26:57 crc kubenswrapper[4860]: W0121 21:26:57.934025 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7ce297_eef0_4067_bd7b_1bb64ced0239.slice/crio-39c7c80b07ffd5c48736b9174976ab42498cac4e1c096464f195ef38a44ebbfe WatchSource:0}: Error finding container 39c7c80b07ffd5c48736b9174976ab42498cac4e1c096464f195ef38a44ebbfe: Status 404 returned error can't find the container with id 39c7c80b07ffd5c48736b9174976ab42498cac4e1c096464f195ef38a44ebbfe Jan 21 21:26:57 crc 
kubenswrapper[4860]: I0121 21:26:57.937093 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h"] Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.952131 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9"] Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.969277 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c"] Jan 21 21:26:57 crc kubenswrapper[4860]: I0121 21:26:57.989168 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8"] Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.029175 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n"] Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.051148 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh"] Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.061885 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn"] Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.067944 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx"] Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.071982 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 
21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.072120 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.072274 4860 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.072336 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs podName:8dad99b9-0de7-450d-8c58-96590671dd98 nodeName:}" failed. No retries permitted until 2026-01-21 21:27:00.072319373 +0000 UTC m=+1112.294497843 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs") pod "openstack-operator-controller-manager-6c98596b-6jfrl" (UID: "8dad99b9-0de7-450d-8c58-96590671dd98") : secret "metrics-server-cert" not found Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.072464 4860 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.072591 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs podName:8dad99b9-0de7-450d-8c58-96590671dd98 nodeName:}" failed. No retries permitted until 2026-01-21 21:27:00.07255935 +0000 UTC m=+1112.294737820 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs") pod "openstack-operator-controller-manager-6c98596b-6jfrl" (UID: "8dad99b9-0de7-450d-8c58-96590671dd98") : secret "webhook-server-cert" not found Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.080372 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb"] Jan 21 21:26:58 crc kubenswrapper[4860]: W0121 21:26:58.082478 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69b9fdd7_ae64_4756_ad1c_27de6ec5ffb5.slice/crio-604918ff666a828cb8625020623848501a76b4b8043e169e35266b978050f310 WatchSource:0}: Error finding container 604918ff666a828cb8625020623848501a76b4b8043e169e35266b978050f310: Status 404 returned error can't find the container with id 604918ff666a828cb8625020623848501a76b4b8043e169e35266b978050f310 Jan 21 21:26:58 crc kubenswrapper[4860]: W0121 21:26:58.085840 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93010989_aa15_487c_b470_919932329af1.slice/crio-9f294b292ed1c29e2e30bb2cf979d06ddd38163a0c65bbf5dfa8e569ca7f61da WatchSource:0}: Error finding container 9f294b292ed1c29e2e30bb2cf979d06ddd38163a0c65bbf5dfa8e569ca7f61da: Status 404 returned error can't find the container with id 9f294b292ed1c29e2e30bb2cf979d06ddd38163a0c65bbf5dfa8e569ca7f61da Jan 21 21:26:58 crc kubenswrapper[4860]: W0121 21:26:58.091307 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f367ab5_2df3_466b_8ec4_7c4f23dcc578.slice/crio-f966255d1ce7c260e79c435c9a4c8d8dc578727ec60af378dc81ca3719d1acb2 WatchSource:0}: Error finding container f966255d1ce7c260e79c435c9a4c8d8dc578727ec60af378dc81ca3719d1acb2: Status 404 
returned error can't find the container with id f966255d1ce7c260e79c435c9a4c8d8dc578727ec60af378dc81ca3719d1acb2 Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.091895 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vf7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-nn25n_openstack-operators(69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.093056 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" podUID="69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5" Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.093736 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7k6gc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-tldvn_openstack-operators(3f367ab5-2df3-466b-8ec4-7c4f23dcc578): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.093878 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 
500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kj755,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-mpknx_openstack-operators(93010989-aa15-487c-b470-919932329af1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.098368 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" podUID="93010989-aa15-487c-b470-919932329af1" Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.098452 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" podUID="3f367ab5-2df3-466b-8ec4-7c4f23dcc578" Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.101965 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v"] Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.104552 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.148:5001/openstack-k8s-operators/watcher-operator:8f89cebcdb83b244613d84873d84cfe705f618b0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pjtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-674d7f6576-jn79v_openstack-operators(38566005-2062-4d80-a44a-11976396a2aa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.107176 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" podUID="38566005-2062-4d80-a44a-11976396a2aa" Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.110837 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nx22n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-bk9sb_openstack-operators(61a273d5-b25c-4729-8736-9965ac435468): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.112234 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" podUID="61a273d5-b25c-4729-8736-9965ac435468" Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.433092 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" event={"ID":"93010989-aa15-487c-b470-919932329af1","Type":"ContainerStarted","Data":"9f294b292ed1c29e2e30bb2cf979d06ddd38163a0c65bbf5dfa8e569ca7f61da"} Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.439567 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" podUID="93010989-aa15-487c-b470-919932329af1" Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.443613 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c" event={"ID":"626c3db6-f60f-472b-b0e5-0834b5bded25","Type":"ContainerStarted","Data":"bb9e301e609cd155762c98108c1928b3c7aa2d53e64b5fe5f9c935d146a1f483"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.446669 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" event={"ID":"33a0c624-f40b-4d45-9b00-39c36c15d6bb","Type":"ContainerStarted","Data":"7a630c3dc3902a5c3ce1f9da1946889470e6b42b7b389944d8db54d9ea625ee8"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.449067 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" event={"ID":"adcb4b85-f016-45ed-8029-7191ade5683a","Type":"ContainerStarted","Data":"d9385cc7ae33a27dccbec734279d4c5241416228f189265e6ea3a0fe367e6acc"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.451747 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9" event={"ID":"b4019683-a628-42e6-91ba-1cb0505326e3","Type":"ContainerStarted","Data":"94bb1eef76489eb2d8f9f5f7cba8b0f9118cfc9be75bf7a35db132f84fbc1360"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.453861 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" 
event={"ID":"69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5","Type":"ContainerStarted","Data":"604918ff666a828cb8625020623848501a76b4b8043e169e35266b978050f310"} Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.455600 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" podUID="69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5" Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.456787 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" event={"ID":"38566005-2062-4d80-a44a-11976396a2aa","Type":"ContainerStarted","Data":"88f4a2f2c3364d299f52b2e0b308e533be07d1f660a12c9fbd851007fdafe3f3"} Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.465065 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.148:5001/openstack-k8s-operators/watcher-operator:8f89cebcdb83b244613d84873d84cfe705f618b0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" podUID="38566005-2062-4d80-a44a-11976396a2aa" Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.467684 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6" event={"ID":"519cbf74-c4d7-425b-837d-afbb85f3ecc4","Type":"ContainerStarted","Data":"ce2a479ac999309dbb4175b7372362719c5602ef2cc3ca48964418363afbe44a"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.469341 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" 
event={"ID":"9731b174-d203-4170-b49f-0de94000f154","Type":"ContainerStarted","Data":"a78056229f5773d6cc4a4f1c0b783f6a47d4adae18a77719acb507bc8f29f755"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.472478 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf" event={"ID":"96503e13-4e73-4048-be57-01a726c114da","Type":"ContainerStarted","Data":"e2ebe16d44ef0a3cb584af320a99b76447b1eb4632681e08d3644a5da13cd0fb"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.478533 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" event={"ID":"084bba8e-36e4-4e04-8109-4b0f6f97d37f","Type":"ContainerStarted","Data":"4ed1f154dfc433bd0bf9a1faf6ba5d1160672b186c09f0cf760449c2b8d57ffc"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.484680 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" event={"ID":"4f7ce297-eef0-4067-bd7b-1bb64ced0239","Type":"ContainerStarted","Data":"39c7c80b07ffd5c48736b9174976ab42498cac4e1c096464f195ef38a44ebbfe"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.487724 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" event={"ID":"a5eceab3-1171-484d-91da-990d323440d4","Type":"ContainerStarted","Data":"07efa8a938ded0080320a10d61b74fb8a66f2d0a7bf55fbbc76e9cf09ff04df7"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.489387 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" event={"ID":"3f367ab5-2df3-466b-8ec4-7c4f23dcc578","Type":"ContainerStarted","Data":"f966255d1ce7c260e79c435c9a4c8d8dc578727ec60af378dc81ca3719d1acb2"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.491118 4860 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" event={"ID":"d107aacb-3e12-43fd-a68c-2a6b2c10295c","Type":"ContainerStarted","Data":"6a1073f9e76a15c687fae0b75d613b0cf6ba2457cc73453e689ee0c7acec30d4"} Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.495792 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" event={"ID":"61a273d5-b25c-4729-8736-9965ac435468","Type":"ContainerStarted","Data":"86c17241949ee52c68a16fd28462af549e479156dd0f0819082c0941a7e18bf7"} Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.500302 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" podUID="3f367ab5-2df3-466b-8ec4-7c4f23dcc578" Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.504297 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" podUID="61a273d5-b25c-4729-8736-9965ac435468" Jan 21 21:26:58 crc kubenswrapper[4860]: I0121 21:26:58.716018 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-8hx7p\" (UID: \"3d5ae9ad-1309-4221-b99a-86b9e5aa075b\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:26:58 crc kubenswrapper[4860]: 
E0121 21:26:58.716307 4860 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 21:26:58 crc kubenswrapper[4860]: E0121 21:26:58.716421 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert podName:3d5ae9ad-1309-4221-b99a-86b9e5aa075b nodeName:}" failed. No retries permitted until 2026-01-21 21:27:02.716396378 +0000 UTC m=+1114.938574848 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert") pod "infra-operator-controller-manager-54ccf4f85d-8hx7p" (UID: "3d5ae9ad-1309-4221-b99a-86b9e5aa075b") : secret "infra-operator-webhook-server-cert" not found Jan 21 21:26:59 crc kubenswrapper[4860]: I0121 21:26:59.326088 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:26:59 crc kubenswrapper[4860]: E0121 21:26:59.326239 4860 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:26:59 crc kubenswrapper[4860]: E0121 21:26:59.326301 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert podName:95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96 nodeName:}" failed. No retries permitted until 2026-01-21 21:27:03.326283809 +0000 UTC m=+1115.548462269 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854787gn" (UID: "95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:26:59 crc kubenswrapper[4860]: E0121 21:26:59.505685 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" podUID="93010989-aa15-487c-b470-919932329af1" Jan 21 21:26:59 crc kubenswrapper[4860]: E0121 21:26:59.506478 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" podUID="3f367ab5-2df3-466b-8ec4-7c4f23dcc578" Jan 21 21:26:59 crc kubenswrapper[4860]: E0121 21:26:59.506639 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.148:5001/openstack-k8s-operators/watcher-operator:8f89cebcdb83b244613d84873d84cfe705f618b0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" podUID="38566005-2062-4d80-a44a-11976396a2aa" Jan 21 21:26:59 crc kubenswrapper[4860]: E0121 21:26:59.506720 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" podUID="61a273d5-b25c-4729-8736-9965ac435468" Jan 21 21:26:59 crc kubenswrapper[4860]: E0121 21:26:59.506842 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" podUID="69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5" Jan 21 21:27:00 crc kubenswrapper[4860]: I0121 21:27:00.254653 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:27:00 crc kubenswrapper[4860]: I0121 21:27:00.254850 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:27:00 crc kubenswrapper[4860]: E0121 21:27:00.255070 4860 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 21:27:00 crc kubenswrapper[4860]: E0121 21:27:00.255076 4860 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 21:27:00 crc 
kubenswrapper[4860]: E0121 21:27:00.255143 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs podName:8dad99b9-0de7-450d-8c58-96590671dd98 nodeName:}" failed. No retries permitted until 2026-01-21 21:27:04.255121724 +0000 UTC m=+1116.477300194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs") pod "openstack-operator-controller-manager-6c98596b-6jfrl" (UID: "8dad99b9-0de7-450d-8c58-96590671dd98") : secret "metrics-server-cert" not found Jan 21 21:27:00 crc kubenswrapper[4860]: E0121 21:27:00.255446 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs podName:8dad99b9-0de7-450d-8c58-96590671dd98 nodeName:}" failed. No retries permitted until 2026-01-21 21:27:04.255375852 +0000 UTC m=+1116.477554362 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs") pod "openstack-operator-controller-manager-6c98596b-6jfrl" (UID: "8dad99b9-0de7-450d-8c58-96590671dd98") : secret "webhook-server-cert" not found Jan 21 21:27:02 crc kubenswrapper[4860]: I0121 21:27:02.104354 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:27:02 crc kubenswrapper[4860]: I0121 21:27:02.104715 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:27:02 crc kubenswrapper[4860]: I0121 21:27:02.788521 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-8hx7p\" (UID: \"3d5ae9ad-1309-4221-b99a-86b9e5aa075b\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:27:02 crc kubenswrapper[4860]: E0121 21:27:02.788777 4860 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 21:27:02 crc kubenswrapper[4860]: E0121 21:27:02.788854 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert podName:3d5ae9ad-1309-4221-b99a-86b9e5aa075b nodeName:}" failed. 
No retries permitted until 2026-01-21 21:27:10.788831924 +0000 UTC m=+1123.011010394 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert") pod "infra-operator-controller-manager-54ccf4f85d-8hx7p" (UID: "3d5ae9ad-1309-4221-b99a-86b9e5aa075b") : secret "infra-operator-webhook-server-cert" not found Jan 21 21:27:03 crc kubenswrapper[4860]: I0121 21:27:03.397830 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:27:03 crc kubenswrapper[4860]: E0121 21:27:03.398061 4860 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:27:03 crc kubenswrapper[4860]: E0121 21:27:03.398610 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert podName:95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96 nodeName:}" failed. No retries permitted until 2026-01-21 21:27:11.3985855 +0000 UTC m=+1123.620763980 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854787gn" (UID: "95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:27:04 crc kubenswrapper[4860]: I0121 21:27:04.300325 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:27:04 crc kubenswrapper[4860]: I0121 21:27:04.300776 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:27:04 crc kubenswrapper[4860]: E0121 21:27:04.300556 4860 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 21:27:04 crc kubenswrapper[4860]: E0121 21:27:04.301095 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs podName:8dad99b9-0de7-450d-8c58-96590671dd98 nodeName:}" failed. No retries permitted until 2026-01-21 21:27:12.301076639 +0000 UTC m=+1124.523255109 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs") pod "openstack-operator-controller-manager-6c98596b-6jfrl" (UID: "8dad99b9-0de7-450d-8c58-96590671dd98") : secret "metrics-server-cert" not found Jan 21 21:27:04 crc kubenswrapper[4860]: E0121 21:27:04.301019 4860 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 21:27:04 crc kubenswrapper[4860]: E0121 21:27:04.301608 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs podName:8dad99b9-0de7-450d-8c58-96590671dd98 nodeName:}" failed. No retries permitted until 2026-01-21 21:27:12.301572854 +0000 UTC m=+1124.523751324 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs") pod "openstack-operator-controller-manager-6c98596b-6jfrl" (UID: "8dad99b9-0de7-450d-8c58-96590671dd98") : secret "webhook-server-cert" not found Jan 21 21:27:10 crc kubenswrapper[4860]: E0121 21:27:10.649447 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 21 21:27:10 crc kubenswrapper[4860]: E0121 21:27:10.650831 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hmn7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-w857v_openstack-operators(4f7ce297-eef0-4067-bd7b-1bb64ced0239): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:10 crc kubenswrapper[4860]: E0121 21:27:10.652118 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" podUID="4f7ce297-eef0-4067-bd7b-1bb64ced0239" Jan 21 21:27:10 crc kubenswrapper[4860]: I0121 21:27:10.831819 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-8hx7p\" (UID: \"3d5ae9ad-1309-4221-b99a-86b9e5aa075b\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:27:10 crc kubenswrapper[4860]: I0121 21:27:10.841011 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/3d5ae9ad-1309-4221-b99a-86b9e5aa075b-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-8hx7p\" (UID: \"3d5ae9ad-1309-4221-b99a-86b9e5aa075b\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:27:10 crc kubenswrapper[4860]: I0121 21:27:10.858818 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:27:10 crc kubenswrapper[4860]: E0121 21:27:10.876352 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" podUID="4f7ce297-eef0-4067-bd7b-1bb64ced0239" Jan 21 21:27:11 crc kubenswrapper[4860]: E0121 21:27:11.433522 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 21 21:27:11 crc kubenswrapper[4860]: E0121 21:27:11.434745 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rbjjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-b29tb_openstack-operators(f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:11 crc kubenswrapper[4860]: E0121 21:27:11.436296 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" podUID="f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85" Jan 21 21:27:11 crc kubenswrapper[4860]: I0121 21:27:11.442880 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:27:11 crc kubenswrapper[4860]: E0121 21:27:11.443342 4860 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:27:11 crc kubenswrapper[4860]: E0121 21:27:11.443479 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert podName:95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96 nodeName:}" failed. No retries permitted until 2026-01-21 21:27:27.443438894 +0000 UTC m=+1139.665617364 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854787gn" (UID: "95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 21:27:11 crc kubenswrapper[4860]: E0121 21:27:11.884308 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" podUID="f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85" Jan 21 21:27:12 crc kubenswrapper[4860]: I0121 21:27:12.359095 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:27:12 crc kubenswrapper[4860]: I0121 21:27:12.359175 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs\") pod 
\"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:27:12 crc kubenswrapper[4860]: E0121 21:27:12.359399 4860 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 21:27:12 crc kubenswrapper[4860]: E0121 21:27:12.359466 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs podName:8dad99b9-0de7-450d-8c58-96590671dd98 nodeName:}" failed. No retries permitted until 2026-01-21 21:27:28.359447806 +0000 UTC m=+1140.581626276 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs") pod "openstack-operator-controller-manager-6c98596b-6jfrl" (UID: "8dad99b9-0de7-450d-8c58-96590671dd98") : secret "webhook-server-cert" not found Jan 21 21:27:12 crc kubenswrapper[4860]: I0121 21:27:12.367398 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-metrics-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:27:12 crc kubenswrapper[4860]: E0121 21:27:12.568550 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 21 21:27:12 crc kubenswrapper[4860]: E0121 21:27:12.569152 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5nrjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-p7jg2_openstack-operators(33a0c624-f40b-4d45-9b00-39c36c15d6bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:12 crc kubenswrapper[4860]: E0121 21:27:12.571161 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" podUID="33a0c624-f40b-4d45-9b00-39c36c15d6bb" Jan 21 21:27:12 crc kubenswrapper[4860]: E0121 21:27:12.889647 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" podUID="33a0c624-f40b-4d45-9b00-39c36c15d6bb" Jan 21 21:27:13 crc kubenswrapper[4860]: E0121 21:27:13.330136 4860 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30" Jan 21 21:27:13 crc kubenswrapper[4860]: E0121 21:27:13.330653 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qd862,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-69d6c9f5b8-ldzzc_openstack-operators(d107aacb-3e12-43fd-a68c-2a6b2c10295c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:13 crc kubenswrapper[4860]: E0121 21:27:13.331842 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" podUID="d107aacb-3e12-43fd-a68c-2a6b2c10295c" Jan 21 21:27:13 crc kubenswrapper[4860]: E0121 21:27:13.899030 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" podUID="d107aacb-3e12-43fd-a68c-2a6b2c10295c" Jan 21 21:27:14 crc kubenswrapper[4860]: E0121 21:27:14.304350 4860 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0" Jan 21 21:27:14 crc kubenswrapper[4860]: E0121 21:27:14.305140 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhb5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-m892h_openstack-operators(9731b174-d203-4170-b49f-0de94000f154): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:14 crc kubenswrapper[4860]: E0121 21:27:14.307602 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" podUID="9731b174-d203-4170-b49f-0de94000f154" Jan 21 21:27:14 crc kubenswrapper[4860]: E0121 21:27:14.913152 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" podUID="9731b174-d203-4170-b49f-0de94000f154" Jan 21 21:27:15 crc kubenswrapper[4860]: E0121 21:27:15.997495 4860 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 21 21:27:15 crc kubenswrapper[4860]: E0121 21:27:15.998302 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bg4wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-nbvmh_openstack-operators(a5eceab3-1171-484d-91da-990d323440d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:15 crc kubenswrapper[4860]: E0121 21:27:15.999580 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" podUID="a5eceab3-1171-484d-91da-990d323440d4" Jan 21 21:27:16 crc kubenswrapper[4860]: E0121 21:27:16.933412 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" podUID="a5eceab3-1171-484d-91da-990d323440d4" Jan 21 21:27:18 crc kubenswrapper[4860]: E0121 21:27:18.892221 4860 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 21 21:27:18 crc kubenswrapper[4860]: E0121 21:27:18.892593 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2hs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-q8wm8_openstack-operators(adcb4b85-f016-45ed-8029-7191ade5683a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:18 crc kubenswrapper[4860]: E0121 21:27:18.893913 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" podUID="adcb4b85-f016-45ed-8029-7191ade5683a" Jan 21 21:27:19 crc kubenswrapper[4860]: E0121 21:27:19.044641 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" podUID="adcb4b85-f016-45ed-8029-7191ade5683a" Jan 21 21:27:20 crc kubenswrapper[4860]: E0121 21:27:20.713262 4860 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 21 21:27:20 crc kubenswrapper[4860]: E0121 21:27:20.714061 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqvhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-pvq7t_openstack-operators(084bba8e-36e4-4e04-8109-4b0f6f97d37f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:20 crc kubenswrapper[4860]: E0121 21:27:20.715282 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" podUID="084bba8e-36e4-4e04-8109-4b0f6f97d37f" Jan 21 21:27:21 crc kubenswrapper[4860]: E0121 21:27:21.063210 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" podUID="084bba8e-36e4-4e04-8109-4b0f6f97d37f" Jan 21 21:27:27 crc kubenswrapper[4860]: I0121 21:27:27.507772 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:27:27 crc kubenswrapper[4860]: I0121 21:27:27.519261 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854787gn\" (UID: \"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:27:27 crc kubenswrapper[4860]: I0121 21:27:27.769628 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:27:28 crc kubenswrapper[4860]: I0121 21:27:28.424202 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:27:28 crc kubenswrapper[4860]: I0121 21:27:28.428719 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dad99b9-0de7-450d-8c58-96590671dd98-webhook-certs\") pod \"openstack-operator-controller-manager-6c98596b-6jfrl\" (UID: \"8dad99b9-0de7-450d-8c58-96590671dd98\") " pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:27:28 crc kubenswrapper[4860]: I0121 21:27:28.699915 4860 reflector.go:368] Caches populated 
for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-6b7wn" Jan 21 21:27:28 crc kubenswrapper[4860]: I0121 21:27:28.708067 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:27:31 crc kubenswrapper[4860]: E0121 21:27:31.212378 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 21 21:27:31 crc kubenswrapper[4860]: E0121 21:27:31.213039 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j5t65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-4vpgf_openstack-operators(96503e13-4e73-4048-be57-01a726c114da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:31 crc kubenswrapper[4860]: E0121 21:27:31.214306 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf" podUID="96503e13-4e73-4048-be57-01a726c114da" Jan 21 21:27:32 crc kubenswrapper[4860]: I0121 21:27:32.104435 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:27:32 crc kubenswrapper[4860]: I0121 21:27:32.104536 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:27:32 crc kubenswrapper[4860]: E0121 21:27:32.306167 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf" podUID="96503e13-4e73-4048-be57-01a726c114da" Jan 21 21:27:32 crc kubenswrapper[4860]: E0121 21:27:32.776601 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 21 21:27:32 crc kubenswrapper[4860]: E0121 21:27:32.777992 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vf7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-nn25n_openstack-operators(69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:32 crc kubenswrapper[4860]: E0121 21:27:32.779221 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" podUID="69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5" Jan 21 21:27:34 crc kubenswrapper[4860]: E0121 21:27:34.574877 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.148:5001/openstack-k8s-operators/watcher-operator:8f89cebcdb83b244613d84873d84cfe705f618b0" Jan 21 21:27:34 crc kubenswrapper[4860]: E0121 21:27:34.575332 4860 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.148:5001/openstack-k8s-operators/watcher-operator:8f89cebcdb83b244613d84873d84cfe705f618b0" Jan 21 21:27:34 crc kubenswrapper[4860]: E0121 21:27:34.575511 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.148:5001/openstack-k8s-operators/watcher-operator:8f89cebcdb83b244613d84873d84cfe705f618b0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pjtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-674d7f6576-jn79v_openstack-operators(38566005-2062-4d80-a44a-11976396a2aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:34 crc kubenswrapper[4860]: E0121 21:27:34.576862 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" podUID="38566005-2062-4d80-a44a-11976396a2aa" Jan 21 21:27:35 crc kubenswrapper[4860]: E0121 21:27:35.089429 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 21:27:35 crc kubenswrapper[4860]: E0121 21:27:35.089968 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kj755,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-mpknx_openstack-operators(93010989-aa15-487c-b470-919932329af1): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:27:35 crc kubenswrapper[4860]: E0121 21:27:35.092350 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" podUID="93010989-aa15-487c-b470-919932329af1" Jan 21 21:27:35 crc kubenswrapper[4860]: I0121 21:27:35.696734 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p"] Jan 21 21:27:35 crc kubenswrapper[4860]: I0121 21:27:35.846291 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn"] Jan 21 21:27:35 crc kubenswrapper[4860]: I0121 21:27:35.855131 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl"] Jan 21 21:27:35 crc kubenswrapper[4860]: W0121 21:27:35.968598 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dad99b9_0de7_450d_8c58_96590671dd98.slice/crio-e33e58e1a2c49389038d652a1fd727dee68cba6670bfa3572696231e63693a64 WatchSource:0}: Error finding container e33e58e1a2c49389038d652a1fd727dee68cba6670bfa3572696231e63693a64: Status 404 returned error can't find the container with id e33e58e1a2c49389038d652a1fd727dee68cba6670bfa3572696231e63693a64 Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.332774 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" event={"ID":"9731b174-d203-4170-b49f-0de94000f154","Type":"ContainerStarted","Data":"e3ca19e3db9e488d5127f7f4109b46a20134d0e39fa7154e475d9c64510ad0cd"} Jan 21 21:27:36 
crc kubenswrapper[4860]: I0121 21:27:36.333634 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.338914 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" event={"ID":"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96","Type":"ContainerStarted","Data":"18babed703393ec62ede3a97915115ee50282a338572c6b7e5ef898dbc6ac01f"} Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.347643 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c" event={"ID":"626c3db6-f60f-472b-b0e5-0834b5bded25","Type":"ContainerStarted","Data":"50959f508244d96ce155cdf8e0295b1c404426bef732b91211b1e8d0c9b5a34c"} Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.348074 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.357319 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" podStartSLOduration=5.037952491 podStartE2EDuration="42.357281204s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:57.968903824 +0000 UTC m=+1110.191082294" lastFinishedPulling="2026-01-21 21:27:35.288232537 +0000 UTC m=+1147.510411007" observedRunningTime="2026-01-21 21:27:36.357086008 +0000 UTC m=+1148.579264488" watchObservedRunningTime="2026-01-21 21:27:36.357281204 +0000 UTC m=+1148.579459684" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.362340 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" 
event={"ID":"61a273d5-b25c-4729-8736-9965ac435468","Type":"ContainerStarted","Data":"657dbecae59615f987f0f734366871dc6686ae7725c830133cf120f87f8ba66c"} Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.363253 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.379287 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp" event={"ID":"404e97a3-3fcd-4ec0-a67d-53ed93d62685","Type":"ContainerStarted","Data":"58d43b86bbfc3192eb66b8d01ae7f6cecc533d7b4894cdb59984e606e972c0ea"} Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.380201 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.389234 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c" podStartSLOduration=7.107159043 podStartE2EDuration="42.38920486s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:57.983518671 +0000 UTC m=+1110.205697131" lastFinishedPulling="2026-01-21 21:27:33.265564478 +0000 UTC m=+1145.487742948" observedRunningTime="2026-01-21 21:27:36.382519695 +0000 UTC m=+1148.604698155" watchObservedRunningTime="2026-01-21 21:27:36.38920486 +0000 UTC m=+1148.611383320" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.389924 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" event={"ID":"3d5ae9ad-1309-4221-b99a-86b9e5aa075b","Type":"ContainerStarted","Data":"0cca6d92bd0ea6b119cf614c330576634701620376b58a4f530720df23489f29"} Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.413634 
4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9" event={"ID":"b4019683-a628-42e6-91ba-1cb0505326e3","Type":"ContainerStarted","Data":"1439bef6f674abb336198bc0e7e96c64be91ad31293f35dcabb7c211b2f5bbba"} Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.414653 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.426719 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb" podStartSLOduration=4.3544677 podStartE2EDuration="41.426688394s" podCreationTimestamp="2026-01-21 21:26:55 +0000 UTC" firstStartedPulling="2026-01-21 21:26:58.110598003 +0000 UTC m=+1110.332776473" lastFinishedPulling="2026-01-21 21:27:35.182818697 +0000 UTC m=+1147.404997167" observedRunningTime="2026-01-21 21:27:36.423221308 +0000 UTC m=+1148.645399768" watchObservedRunningTime="2026-01-21 21:27:36.426688394 +0000 UTC m=+1148.648866864" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.427704 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps" event={"ID":"2dd3e1b9-abea-4287-87e0-cb3f60423d54","Type":"ContainerStarted","Data":"5746f5e2187b09ee516c4283eec85867f8c94873338be3bccca0fb01a356cd43"} Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.428539 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.446230 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" 
event={"ID":"8dad99b9-0de7-450d-8c58-96590671dd98","Type":"ContainerStarted","Data":"e33e58e1a2c49389038d652a1fd727dee68cba6670bfa3572696231e63693a64"} Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.455942 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq" event={"ID":"1a209a81-fb7b-4621-84db-567f96093a6b","Type":"ContainerStarted","Data":"0f4781b68dcca530cf8ae247e48602a8f577fcf51d2a63f0d35dd50e9bc7fc7f"} Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.456993 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.472028 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6" event={"ID":"519cbf74-c4d7-425b-837d-afbb85f3ecc4","Type":"ContainerStarted","Data":"6e7cb3c15c5a7417c78904fa559e67956d8e93bc532c1dc2ad53d97946aad684"} Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.473111 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.477710 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" event={"ID":"f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85","Type":"ContainerStarted","Data":"84c4ae946e6158bd7694ac1ced87000717970e0c973257e371b15d3ba8a71aed"} Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.479514 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.548786 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp" podStartSLOduration=6.238575849 podStartE2EDuration="42.548752344s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:56.461852967 +0000 UTC m=+1108.684031437" lastFinishedPulling="2026-01-21 21:27:32.772029462 +0000 UTC m=+1144.994207932" observedRunningTime="2026-01-21 21:27:36.542984897 +0000 UTC m=+1148.765163387" watchObservedRunningTime="2026-01-21 21:27:36.548752344 +0000 UTC m=+1148.770930814" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.592206 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9" podStartSLOduration=5.092238948 podStartE2EDuration="41.592159099s" podCreationTimestamp="2026-01-21 21:26:55 +0000 UTC" firstStartedPulling="2026-01-21 21:26:57.983756298 +0000 UTC m=+1110.205934768" lastFinishedPulling="2026-01-21 21:27:34.483676439 +0000 UTC m=+1146.705854919" observedRunningTime="2026-01-21 21:27:36.589275981 +0000 UTC m=+1148.811454451" watchObservedRunningTime="2026-01-21 21:27:36.592159099 +0000 UTC m=+1148.814337589" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.680377 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" podStartSLOduration=4.585704246 podStartE2EDuration="42.680359654s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:57.174025502 +0000 UTC m=+1109.396203962" lastFinishedPulling="2026-01-21 21:27:35.26868089 +0000 UTC m=+1147.490859370" observedRunningTime="2026-01-21 21:27:36.676580788 +0000 UTC m=+1148.898759258" watchObservedRunningTime="2026-01-21 21:27:36.680359654 +0000 UTC m=+1148.902538124" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.852378 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6" podStartSLOduration=7.522269772 podStartE2EDuration="42.852345987s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:57.935844014 +0000 UTC m=+1110.158022484" lastFinishedPulling="2026-01-21 21:27:33.265920229 +0000 UTC m=+1145.488098699" observedRunningTime="2026-01-21 21:27:36.829325444 +0000 UTC m=+1149.051503914" watchObservedRunningTime="2026-01-21 21:27:36.852345987 +0000 UTC m=+1149.074524457" Jan 21 21:27:36 crc kubenswrapper[4860]: I0121 21:27:36.853192 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps" podStartSLOduration=6.4058127559999996 podStartE2EDuration="42.853185933s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:56.820488062 +0000 UTC m=+1109.042666522" lastFinishedPulling="2026-01-21 21:27:33.267861229 +0000 UTC m=+1145.490039699" observedRunningTime="2026-01-21 21:27:36.805861247 +0000 UTC m=+1149.028039707" watchObservedRunningTime="2026-01-21 21:27:36.853185933 +0000 UTC m=+1149.075364403" Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.525980 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" event={"ID":"33a0c624-f40b-4d45-9b00-39c36c15d6bb","Type":"ContainerStarted","Data":"5e041147a70d29c5760adee0ace1e617b25b5e891cf14423919141a4c29f8899"} Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.527038 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.541716 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" 
event={"ID":"a5eceab3-1171-484d-91da-990d323440d4","Type":"ContainerStarted","Data":"891bb008d1cbf4bbd25679399d7d3362d00f28e6061cb67b957e85dd06272b13"} Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.542442 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.562906 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" event={"ID":"adcb4b85-f016-45ed-8029-7191ade5683a","Type":"ContainerStarted","Data":"27dc9a43f45ecf91ea101b5ed42dce78c50e76e3b0227145a3f362eb97051d1b"} Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.563948 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.596817 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" podStartSLOduration=6.301738016 podStartE2EDuration="43.596798128s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:57.940319151 +0000 UTC m=+1110.162497621" lastFinishedPulling="2026-01-21 21:27:35.235379263 +0000 UTC m=+1147.457557733" observedRunningTime="2026-01-21 21:27:37.594792527 +0000 UTC m=+1149.816970987" watchObservedRunningTime="2026-01-21 21:27:37.596798128 +0000 UTC m=+1149.818976588" Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.598791 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq" podStartSLOduration=17.165197648 podStartE2EDuration="43.59878042s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:56.859413792 +0000 UTC m=+1109.081592262" 
lastFinishedPulling="2026-01-21 21:27:23.292996564 +0000 UTC m=+1135.515175034" observedRunningTime="2026-01-21 21:27:36.960993776 +0000 UTC m=+1149.183172236" watchObservedRunningTime="2026-01-21 21:27:37.59878042 +0000 UTC m=+1149.820958890" Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.605302 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" event={"ID":"3f367ab5-2df3-466b-8ec4-7c4f23dcc578","Type":"ContainerStarted","Data":"8d0d248694dd54b1ed651f848678d400da94763867bc15169c75063729f6f0c7"} Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.605666 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.650608 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" event={"ID":"084bba8e-36e4-4e04-8109-4b0f6f97d37f","Type":"ContainerStarted","Data":"a47a5b1cf421bcc222c37e7aa7ca7ce1d192e7a5503538f7c1427fc0c6ddc3c5"} Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.651974 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.679054 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" event={"ID":"8dad99b9-0de7-450d-8c58-96590671dd98","Type":"ContainerStarted","Data":"0475983f63d1c67de1059ec434b0094f5a3a30d7fea1418d2cd58eea58c0ce65"} Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.679520 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.759731 4860 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" podStartSLOduration=6.5729753429999995 podStartE2EDuration="43.759710385s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:58.048594909 +0000 UTC m=+1110.270773379" lastFinishedPulling="2026-01-21 21:27:35.235329951 +0000 UTC m=+1147.457508421" observedRunningTime="2026-01-21 21:27:37.758829738 +0000 UTC m=+1149.981008208" watchObservedRunningTime="2026-01-21 21:27:37.759710385 +0000 UTC m=+1149.981888855" Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.957870 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" event={"ID":"d107aacb-3e12-43fd-a68c-2a6b2c10295c","Type":"ContainerStarted","Data":"dae1e0783d7666df6ebb51b6d32ac2f768854d9f2a99cbf7ca280d9e822b9758"} Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.958681 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.971978 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" event={"ID":"4f7ce297-eef0-4067-bd7b-1bb64ced0239","Type":"ContainerStarted","Data":"e899754b24ef2a169c08ae942792f2d5a06fbf3684144f5807f4ccb4b3221676"} Jan 21 21:27:37 crc kubenswrapper[4860]: I0121 21:27:37.972513 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" Jan 21 21:27:38 crc kubenswrapper[4860]: I0121 21:27:37.985109 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" podStartSLOduration=6.767523556 podStartE2EDuration="43.98508368s" 
podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:58.049655491 +0000 UTC m=+1110.271833961" lastFinishedPulling="2026-01-21 21:27:35.267215615 +0000 UTC m=+1147.489394085" observedRunningTime="2026-01-21 21:27:37.983715698 +0000 UTC m=+1150.205894188" watchObservedRunningTime="2026-01-21 21:27:37.98508368 +0000 UTC m=+1150.207262150" Jan 21 21:27:38 crc kubenswrapper[4860]: I0121 21:27:38.458414 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" podStartSLOduration=7.154644051 podStartE2EDuration="44.458394848s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:57.944608912 +0000 UTC m=+1110.166787382" lastFinishedPulling="2026-01-21 21:27:35.248359709 +0000 UTC m=+1147.470538179" observedRunningTime="2026-01-21 21:27:38.418406807 +0000 UTC m=+1150.640585287" watchObservedRunningTime="2026-01-21 21:27:38.458394848 +0000 UTC m=+1150.680573318" Jan 21 21:27:38 crc kubenswrapper[4860]: I0121 21:27:38.483431 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" podStartSLOduration=7.189233088 podStartE2EDuration="44.483410533s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:57.968552913 +0000 UTC m=+1110.190731383" lastFinishedPulling="2026-01-21 21:27:35.262730358 +0000 UTC m=+1147.484908828" observedRunningTime="2026-01-21 21:27:38.482607018 +0000 UTC m=+1150.704785488" watchObservedRunningTime="2026-01-21 21:27:38.483410533 +0000 UTC m=+1150.705589003" Jan 21 21:27:38 crc kubenswrapper[4860]: I0121 21:27:38.544036 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl" podStartSLOduration=43.544010644 podStartE2EDuration="43.544010644s" 
podCreationTimestamp="2026-01-21 21:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:27:38.530280855 +0000 UTC m=+1150.752459325" watchObservedRunningTime="2026-01-21 21:27:38.544010644 +0000 UTC m=+1150.766189124" Jan 21 21:27:38 crc kubenswrapper[4860]: I0121 21:27:38.572537 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" podStartSLOduration=6.734704023 podStartE2EDuration="44.572512105s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:57.431420215 +0000 UTC m=+1109.653598685" lastFinishedPulling="2026-01-21 21:27:35.269228297 +0000 UTC m=+1147.491406767" observedRunningTime="2026-01-21 21:27:38.56320806 +0000 UTC m=+1150.785386530" watchObservedRunningTime="2026-01-21 21:27:38.572512105 +0000 UTC m=+1150.794690575" Jan 21 21:27:38 crc kubenswrapper[4860]: I0121 21:27:38.611465 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn" podStartSLOduration=6.522270242 podStartE2EDuration="43.611437444s" podCreationTimestamp="2026-01-21 21:26:55 +0000 UTC" firstStartedPulling="2026-01-21 21:26:58.093645335 +0000 UTC m=+1110.315823805" lastFinishedPulling="2026-01-21 21:27:35.182812517 +0000 UTC m=+1147.404991007" observedRunningTime="2026-01-21 21:27:38.601104369 +0000 UTC m=+1150.823282839" watchObservedRunningTime="2026-01-21 21:27:38.611437444 +0000 UTC m=+1150.833615914" Jan 21 21:27:44 crc kubenswrapper[4860]: I0121 21:27:44.824531 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c95ps" Jan 21 21:27:44 crc kubenswrapper[4860]: I0121 21:27:44.831803 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sslzp" Jan 21 21:27:44 crc kubenswrapper[4860]: I0121 21:27:44.953095 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-vrvmq" Jan 21 21:27:45 crc kubenswrapper[4860]: I0121 21:27:45.054106 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b29tb" Jan 21 21:27:45 crc kubenswrapper[4860]: I0121 21:27:45.113750 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pvq7t" Jan 21 21:27:45 crc kubenswrapper[4860]: I0121 21:27:45.197874 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-p7jg2" Jan 21 21:27:45 crc kubenswrapper[4860]: I0121 21:27:45.299803 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-ldzzc" Jan 21 21:27:45 crc kubenswrapper[4860]: I0121 21:27:45.345608 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-w6jg6" Jan 21 21:27:45 crc kubenswrapper[4860]: I0121 21:27:45.402865 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w857v" Jan 21 21:27:45 crc kubenswrapper[4860]: I0121 21:27:45.565233 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-8mv6c" Jan 21 21:27:45 crc kubenswrapper[4860]: E0121 21:27:45.581348 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" podUID="69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5" Jan 21 21:27:45 crc kubenswrapper[4860]: I0121 21:27:45.803878 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q8wm8" Jan 21 21:27:45 crc kubenswrapper[4860]: I0121 21:27:45.891688 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-nbvmh" Jan 21 21:27:45 crc kubenswrapper[4860]: I0121 21:27:45.958666 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-m892h" Jan 21 21:27:46 crc kubenswrapper[4860]: I0121 21:27:46.201889 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf" event={"ID":"96503e13-4e73-4048-be57-01a726c114da","Type":"ContainerStarted","Data":"1dad2625acafc617c323711b4e4e7e58e19a9621bc46675ab68fbe10d8b21769"} Jan 21 21:27:46 crc kubenswrapper[4860]: I0121 21:27:46.203281 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf" Jan 21 21:27:46 crc kubenswrapper[4860]: I0121 21:27:46.207863 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" event={"ID":"95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96","Type":"ContainerStarted","Data":"ecf63b20b8459927cbeebfe3534b7b5a5582c8fe8da410c612285c2a77ef12d4"} Jan 21 21:27:46 crc kubenswrapper[4860]: I0121 21:27:46.208053 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" Jan 21 21:27:46 crc kubenswrapper[4860]: I0121 21:27:46.209146 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" event={"ID":"3d5ae9ad-1309-4221-b99a-86b9e5aa075b","Type":"ContainerStarted","Data":"c5ba7085ef00bd206703dbf5a5f4d183f0438c51f91650907f7d95ece26323ef"} Jan 21 21:27:46 crc kubenswrapper[4860]: I0121 21:27:46.209578 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" Jan 21 21:27:46 crc kubenswrapper[4860]: I0121 21:27:46.235876 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf" podStartSLOduration=4.834615509 podStartE2EDuration="52.235850473s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:57.944287272 +0000 UTC m=+1110.166465742" lastFinishedPulling="2026-01-21 21:27:45.345522236 +0000 UTC m=+1157.567700706" observedRunningTime="2026-01-21 21:27:46.220576537 +0000 UTC m=+1158.442755027" watchObservedRunningTime="2026-01-21 21:27:46.235850473 +0000 UTC m=+1158.458028943" Jan 21 21:27:46 crc kubenswrapper[4860]: I0121 21:27:46.251099 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p" podStartSLOduration=42.618739711 podStartE2EDuration="52.251067248s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:27:35.713144177 +0000 UTC m=+1147.935322647" lastFinishedPulling="2026-01-21 21:27:45.345471714 +0000 UTC m=+1157.567650184" observedRunningTime="2026-01-21 21:27:46.246561521 +0000 UTC m=+1158.468739991" watchObservedRunningTime="2026-01-21 21:27:46.251067248 +0000 UTC m=+1158.473245738" Jan 21 21:27:46 crc 
kubenswrapper[4860]: I0121 21:27:46.278987 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn" podStartSLOduration=42.864791427 podStartE2EDuration="52.27895352s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:27:35.937673776 +0000 UTC m=+1148.159852246" lastFinishedPulling="2026-01-21 21:27:45.351835869 +0000 UTC m=+1157.574014339" observedRunningTime="2026-01-21 21:27:46.273086061 +0000 UTC m=+1158.495264551" watchObservedRunningTime="2026-01-21 21:27:46.27895352 +0000 UTC m=+1158.501132000"
Jan 21 21:27:46 crc kubenswrapper[4860]: I0121 21:27:46.384000 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pv9x9"
Jan 21 21:27:46 crc kubenswrapper[4860]: I0121 21:27:46.494805 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-tldvn"
Jan 21 21:27:46 crc kubenswrapper[4860]: I0121 21:27:46.517752 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bk9sb"
Jan 21 21:27:46 crc kubenswrapper[4860]: E0121 21:27:46.581878 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" podUID="93010989-aa15-487c-b470-919932329af1"
Jan 21 21:27:48 crc kubenswrapper[4860]: I0121 21:27:48.713458 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6c98596b-6jfrl"
Jan 21 21:27:49 crc kubenswrapper[4860]: E0121 21:27:49.581403 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.148:5001/openstack-k8s-operators/watcher-operator:8f89cebcdb83b244613d84873d84cfe705f618b0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" podUID="38566005-2062-4d80-a44a-11976396a2aa"
Jan 21 21:27:50 crc kubenswrapper[4860]: I0121 21:27:50.867471 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-8hx7p"
Jan 21 21:27:55 crc kubenswrapper[4860]: I0121 21:27:55.369407 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4vpgf"
Jan 21 21:27:57 crc kubenswrapper[4860]: I0121 21:27:57.777734 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854787gn"
Jan 21 21:28:00 crc kubenswrapper[4860]: I0121 21:28:00.341469 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" event={"ID":"69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5","Type":"ContainerStarted","Data":"43b6afc783c5e733567fe9f1a03e13df36a836fa4368496277563160d49ed147"}
Jan 21 21:28:00 crc kubenswrapper[4860]: I0121 21:28:00.342303 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n"
Jan 21 21:28:00 crc kubenswrapper[4860]: I0121 21:28:00.370064 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n" podStartSLOduration=4.410861498 podStartE2EDuration="1m6.369984553s" podCreationTimestamp="2026-01-21 21:26:54 +0000 UTC" firstStartedPulling="2026-01-21 21:26:58.091609582 +0000 UTC m=+1110.313788052" lastFinishedPulling="2026-01-21 21:28:00.050732637 +0000 UTC m=+1172.272911107" observedRunningTime="2026-01-21 21:28:00.361165202 +0000 UTC m=+1172.583343682" watchObservedRunningTime="2026-01-21 21:28:00.369984553 +0000 UTC m=+1172.592163013"
Jan 21 21:28:01 crc kubenswrapper[4860]: I0121 21:28:01.480040 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" event={"ID":"93010989-aa15-487c-b470-919932329af1","Type":"ContainerStarted","Data":"ca01ea36d4e38d042efd9c6a2c5402442ecd1f3dd6da4c4589c10c70bf800a41"}
Jan 21 21:28:01 crc kubenswrapper[4860]: I0121 21:28:01.529297 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpknx" podStartSLOduration=3.623039665 podStartE2EDuration="1m6.529265129s" podCreationTimestamp="2026-01-21 21:26:55 +0000 UTC" firstStartedPulling="2026-01-21 21:26:58.09379522 +0000 UTC m=+1110.315973690" lastFinishedPulling="2026-01-21 21:28:01.000020674 +0000 UTC m=+1173.222199154" observedRunningTime="2026-01-21 21:28:01.503052266 +0000 UTC m=+1173.725230736" watchObservedRunningTime="2026-01-21 21:28:01.529265129 +0000 UTC m=+1173.751443599"
Jan 21 21:28:02 crc kubenswrapper[4860]: I0121 21:28:02.104161 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:28:02 crc kubenswrapper[4860]: I0121 21:28:02.104247 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:28:02 crc kubenswrapper[4860]: I0121 21:28:02.104312 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx"
Jan 21 21:28:02 crc kubenswrapper[4860]: I0121 21:28:02.105207 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f0b3fc12fa9ba32ff6e2eb0239bbfea7864555f13d17d499448eef7cdde4887"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 21:28:02 crc kubenswrapper[4860]: I0121 21:28:02.105301 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://6f0b3fc12fa9ba32ff6e2eb0239bbfea7864555f13d17d499448eef7cdde4887" gracePeriod=600
Jan 21 21:28:02 crc kubenswrapper[4860]: I0121 21:28:02.492260 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="6f0b3fc12fa9ba32ff6e2eb0239bbfea7864555f13d17d499448eef7cdde4887" exitCode=0
Jan 21 21:28:02 crc kubenswrapper[4860]: I0121 21:28:02.492329 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"6f0b3fc12fa9ba32ff6e2eb0239bbfea7864555f13d17d499448eef7cdde4887"}
Jan 21 21:28:02 crc kubenswrapper[4860]: I0121 21:28:02.492790 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"6a1026d7df8e6decaf8dcd0187c59fd31bbfa3791da6287908484db6b5520da6"}
Jan 21 21:28:02 crc kubenswrapper[4860]: I0121 21:28:02.492847 4860 scope.go:117] "RemoveContainer" containerID="6450f5e048fd300a5315e1af026d3a0f05cce9ec9913389ebdc890cf54d0c51e"
Jan 21 21:28:04 crc kubenswrapper[4860]: I0121 21:28:04.514122 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" event={"ID":"38566005-2062-4d80-a44a-11976396a2aa","Type":"ContainerStarted","Data":"49f77acde81e40177fd99669a47889a0d09d86fcf259e61dbaecffb400f25edd"}
Jan 21 21:28:04 crc kubenswrapper[4860]: I0121 21:28:04.515465 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v"
Jan 21 21:28:04 crc kubenswrapper[4860]: I0121 21:28:04.542303 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" podStartSLOduration=3.99429095 podStartE2EDuration="1m9.542264523s" podCreationTimestamp="2026-01-21 21:26:55 +0000 UTC" firstStartedPulling="2026-01-21 21:26:58.104042852 +0000 UTC m=+1110.326221322" lastFinishedPulling="2026-01-21 21:28:03.652016425 +0000 UTC m=+1175.874194895" observedRunningTime="2026-01-21 21:28:04.536814496 +0000 UTC m=+1176.758992976" watchObservedRunningTime="2026-01-21 21:28:04.542264523 +0000 UTC m=+1176.764443003"
Jan 21 21:28:05 crc kubenswrapper[4860]: I0121 21:28:05.679259 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-nn25n"
Jan 21 21:28:16 crc kubenswrapper[4860]: I0121 21:28:16.542611 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v"
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.205595 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v"]
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.209094 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" podUID="38566005-2062-4d80-a44a-11976396a2aa" containerName="manager" containerID="cri-o://49f77acde81e40177fd99669a47889a0d09d86fcf259e61dbaecffb400f25edd" gracePeriod=10
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.284052 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-657d864869-q6v9p"]
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.284376 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" podUID="00e7e600-d3e0-4dc7-9b65-48c39d9c2938" containerName="operator" containerID="cri-o://fc5ed30b24d0bedfe847d4125b01b52dec47b3697f436189073226e792a27f1e" gracePeriod=10
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.693433 4860 generic.go:334] "Generic (PLEG): container finished" podID="38566005-2062-4d80-a44a-11976396a2aa" containerID="49f77acde81e40177fd99669a47889a0d09d86fcf259e61dbaecffb400f25edd" exitCode=0
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.693544 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" event={"ID":"38566005-2062-4d80-a44a-11976396a2aa","Type":"ContainerDied","Data":"49f77acde81e40177fd99669a47889a0d09d86fcf259e61dbaecffb400f25edd"}
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.694150 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v" 
event={"ID":"38566005-2062-4d80-a44a-11976396a2aa","Type":"ContainerDied","Data":"88f4a2f2c3364d299f52b2e0b308e533be07d1f660a12c9fbd851007fdafe3f3"}
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.694186 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f4a2f2c3364d299f52b2e0b308e533be07d1f660a12c9fbd851007fdafe3f3"
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.696911 4860 generic.go:334] "Generic (PLEG): container finished" podID="00e7e600-d3e0-4dc7-9b65-48c39d9c2938" containerID="fc5ed30b24d0bedfe847d4125b01b52dec47b3697f436189073226e792a27f1e" exitCode=0
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.696980 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" event={"ID":"00e7e600-d3e0-4dc7-9b65-48c39d9c2938","Type":"ContainerDied","Data":"fc5ed30b24d0bedfe847d4125b01b52dec47b3697f436189073226e792a27f1e"}
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.734586 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v"
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.817642 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p"
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.877085 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pjtd\" (UniqueName: \"kubernetes.io/projected/38566005-2062-4d80-a44a-11976396a2aa-kube-api-access-6pjtd\") pod \"38566005-2062-4d80-a44a-11976396a2aa\" (UID: \"38566005-2062-4d80-a44a-11976396a2aa\") "
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.884065 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38566005-2062-4d80-a44a-11976396a2aa-kube-api-access-6pjtd" (OuterVolumeSpecName: "kube-api-access-6pjtd") pod "38566005-2062-4d80-a44a-11976396a2aa" (UID: "38566005-2062-4d80-a44a-11976396a2aa"). InnerVolumeSpecName "kube-api-access-6pjtd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.979106 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wprg5\" (UniqueName: \"kubernetes.io/projected/00e7e600-d3e0-4dc7-9b65-48c39d9c2938-kube-api-access-wprg5\") pod \"00e7e600-d3e0-4dc7-9b65-48c39d9c2938\" (UID: \"00e7e600-d3e0-4dc7-9b65-48c39d9c2938\") "
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.979501 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pjtd\" (UniqueName: \"kubernetes.io/projected/38566005-2062-4d80-a44a-11976396a2aa-kube-api-access-6pjtd\") on node \"crc\" DevicePath \"\""
Jan 21 21:28:22 crc kubenswrapper[4860]: I0121 21:28:22.983635 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e7e600-d3e0-4dc7-9b65-48c39d9c2938-kube-api-access-wprg5" (OuterVolumeSpecName: "kube-api-access-wprg5") pod "00e7e600-d3e0-4dc7-9b65-48c39d9c2938" (UID: "00e7e600-d3e0-4dc7-9b65-48c39d9c2938"). InnerVolumeSpecName "kube-api-access-wprg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:28:23 crc kubenswrapper[4860]: I0121 21:28:23.081366 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wprg5\" (UniqueName: \"kubernetes.io/projected/00e7e600-d3e0-4dc7-9b65-48c39d9c2938-kube-api-access-wprg5\") on node \"crc\" DevicePath \"\""
Jan 21 21:28:23 crc kubenswrapper[4860]: I0121 21:28:23.732843 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v"
Jan 21 21:28:23 crc kubenswrapper[4860]: I0121 21:28:23.732843 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p"
Jan 21 21:28:23 crc kubenswrapper[4860]: I0121 21:28:23.732868 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-657d864869-q6v9p" event={"ID":"00e7e600-d3e0-4dc7-9b65-48c39d9c2938","Type":"ContainerDied","Data":"eba75e3a2ab20b88298fa76d026f1e9b02b5a484404d0e8a441379aa28efbf5a"}
Jan 21 21:28:23 crc kubenswrapper[4860]: I0121 21:28:23.733635 4860 scope.go:117] "RemoveContainer" containerID="fc5ed30b24d0bedfe847d4125b01b52dec47b3697f436189073226e792a27f1e"
Jan 21 21:28:23 crc kubenswrapper[4860]: I0121 21:28:23.802743 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v"]
Jan 21 21:28:23 crc kubenswrapper[4860]: I0121 21:28:23.808703 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-674d7f6576-jn79v"]
Jan 21 21:28:23 crc kubenswrapper[4860]: I0121 21:28:23.835034 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-657d864869-q6v9p"]
Jan 21 21:28:23 crc kubenswrapper[4860]: I0121 21:28:23.867819 4860 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-657d864869-q6v9p"]
Jan 21 21:28:24 crc kubenswrapper[4860]: I0121 21:28:24.590247 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00e7e600-d3e0-4dc7-9b65-48c39d9c2938" path="/var/lib/kubelet/pods/00e7e600-d3e0-4dc7-9b65-48c39d9c2938/volumes"
Jan 21 21:28:24 crc kubenswrapper[4860]: I0121 21:28:24.590872 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38566005-2062-4d80-a44a-11976396a2aa" path="/var/lib/kubelet/pods/38566005-2062-4d80-a44a-11976396a2aa/volumes"
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.215571 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-4cjtk"]
Jan 21 21:28:27 crc kubenswrapper[4860]: E0121 21:28:27.216489 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38566005-2062-4d80-a44a-11976396a2aa" containerName="manager"
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.216509 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="38566005-2062-4d80-a44a-11976396a2aa" containerName="manager"
Jan 21 21:28:27 crc kubenswrapper[4860]: E0121 21:28:27.216546 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e7e600-d3e0-4dc7-9b65-48c39d9c2938" containerName="operator"
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.216553 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e7e600-d3e0-4dc7-9b65-48c39d9c2938" containerName="operator"
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.216768 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="38566005-2062-4d80-a44a-11976396a2aa" containerName="manager"
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.216789 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e7e600-d3e0-4dc7-9b65-48c39d9c2938" containerName="operator"
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.217424 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-4cjtk"
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.220274 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-index-dockercfg-q8tkl"
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.227165 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-4cjtk"]
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.406394 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgvkx\" (UniqueName: \"kubernetes.io/projected/5366e918-1bda-4ea8-a5e1-a979b86c99ec-kube-api-access-lgvkx\") pod \"watcher-operator-index-4cjtk\" (UID: \"5366e918-1bda-4ea8-a5e1-a979b86c99ec\") " pod="openstack-operators/watcher-operator-index-4cjtk"
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.509512 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgvkx\" (UniqueName: \"kubernetes.io/projected/5366e918-1bda-4ea8-a5e1-a979b86c99ec-kube-api-access-lgvkx\") pod \"watcher-operator-index-4cjtk\" (UID: \"5366e918-1bda-4ea8-a5e1-a979b86c99ec\") " pod="openstack-operators/watcher-operator-index-4cjtk"
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.538069 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgvkx\" (UniqueName: \"kubernetes.io/projected/5366e918-1bda-4ea8-a5e1-a979b86c99ec-kube-api-access-lgvkx\") pod \"watcher-operator-index-4cjtk\" (UID: \"5366e918-1bda-4ea8-a5e1-a979b86c99ec\") " pod="openstack-operators/watcher-operator-index-4cjtk"
Jan 21 21:28:27 crc kubenswrapper[4860]: I0121 21:28:27.584058 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-4cjtk"
Jan 21 21:28:28 crc kubenswrapper[4860]: I0121 21:28:28.125090 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-4cjtk"]
Jan 21 21:28:28 crc kubenswrapper[4860]: I0121 21:28:28.800249 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-4cjtk" event={"ID":"5366e918-1bda-4ea8-a5e1-a979b86c99ec","Type":"ContainerStarted","Data":"88edf29c1354003d21432669af18005f7faa3fdee3d43bb4a15694476a0dffc7"}
Jan 21 21:28:28 crc kubenswrapper[4860]: I0121 21:28:28.800824 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-4cjtk" event={"ID":"5366e918-1bda-4ea8-a5e1-a979b86c99ec","Type":"ContainerStarted","Data":"8d4202fb8ca62248be84b3e57f789a8695918629668d4c15e743f6bc217b7d4a"}
Jan 21 21:28:28 crc kubenswrapper[4860]: I0121 21:28:28.834749 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-4cjtk" podStartSLOduration=1.455240237 podStartE2EDuration="1.834689435s" podCreationTimestamp="2026-01-21 21:28:27 +0000 UTC" firstStartedPulling="2026-01-21 21:28:28.138328104 +0000 UTC m=+1200.360506574" lastFinishedPulling="2026-01-21 21:28:28.517777302 +0000 UTC m=+1200.739955772" observedRunningTime="2026-01-21 21:28:28.8273293 +0000 UTC m=+1201.049507790" watchObservedRunningTime="2026-01-21 21:28:28.834689435 +0000 UTC m=+1201.056867895"
Jan 21 21:28:31 crc kubenswrapper[4860]: I0121 21:28:31.609708 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-4cjtk"]
Jan 21 21:28:31 crc kubenswrapper[4860]: I0121 21:28:31.611695 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-index-4cjtk" podUID="5366e918-1bda-4ea8-a5e1-a979b86c99ec" containerName="registry-server" containerID="cri-o://88edf29c1354003d21432669af18005f7faa3fdee3d43bb4a15694476a0dffc7" gracePeriod=2
Jan 21 21:28:31 crc kubenswrapper[4860]: I0121 21:28:31.829704 4860 generic.go:334] "Generic (PLEG): container finished" podID="5366e918-1bda-4ea8-a5e1-a979b86c99ec" containerID="88edf29c1354003d21432669af18005f7faa3fdee3d43bb4a15694476a0dffc7" exitCode=0
Jan 21 21:28:31 crc kubenswrapper[4860]: I0121 21:28:31.830009 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-4cjtk" event={"ID":"5366e918-1bda-4ea8-a5e1-a979b86c99ec","Type":"ContainerDied","Data":"88edf29c1354003d21432669af18005f7faa3fdee3d43bb4a15694476a0dffc7"}
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.088928 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-4cjtk"
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.213425 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-8w757"]
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.213834 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgvkx\" (UniqueName: \"kubernetes.io/projected/5366e918-1bda-4ea8-a5e1-a979b86c99ec-kube-api-access-lgvkx\") pod \"5366e918-1bda-4ea8-a5e1-a979b86c99ec\" (UID: \"5366e918-1bda-4ea8-a5e1-a979b86c99ec\") "
Jan 21 21:28:32 crc kubenswrapper[4860]: E0121 21:28:32.214043 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5366e918-1bda-4ea8-a5e1-a979b86c99ec" containerName="registry-server"
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.214080 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5366e918-1bda-4ea8-a5e1-a979b86c99ec" containerName="registry-server"
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.214590 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5366e918-1bda-4ea8-a5e1-a979b86c99ec" 
containerName="registry-server"
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.216430 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-8w757"
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.221079 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5366e918-1bda-4ea8-a5e1-a979b86c99ec-kube-api-access-lgvkx" (OuterVolumeSpecName: "kube-api-access-lgvkx") pod "5366e918-1bda-4ea8-a5e1-a979b86c99ec" (UID: "5366e918-1bda-4ea8-a5e1-a979b86c99ec"). InnerVolumeSpecName "kube-api-access-lgvkx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.239377 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-8w757"]
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.316613 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgvkx\" (UniqueName: \"kubernetes.io/projected/5366e918-1bda-4ea8-a5e1-a979b86c99ec-kube-api-access-lgvkx\") on node \"crc\" DevicePath \"\""
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.418002 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2b8x\" (UniqueName: \"kubernetes.io/projected/bdbebf1c-8bd6-4223-939a-f088d773cdc5-kube-api-access-f2b8x\") pod \"watcher-operator-index-8w757\" (UID: \"bdbebf1c-8bd6-4223-939a-f088d773cdc5\") " pod="openstack-operators/watcher-operator-index-8w757"
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.520137 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2b8x\" (UniqueName: \"kubernetes.io/projected/bdbebf1c-8bd6-4223-939a-f088d773cdc5-kube-api-access-f2b8x\") pod \"watcher-operator-index-8w757\" (UID: \"bdbebf1c-8bd6-4223-939a-f088d773cdc5\") " pod="openstack-operators/watcher-operator-index-8w757"
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.539485 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2b8x\" (UniqueName: \"kubernetes.io/projected/bdbebf1c-8bd6-4223-939a-f088d773cdc5-kube-api-access-f2b8x\") pod \"watcher-operator-index-8w757\" (UID: \"bdbebf1c-8bd6-4223-939a-f088d773cdc5\") " pod="openstack-operators/watcher-operator-index-8w757"
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.561133 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-8w757"
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.845345 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-4cjtk" event={"ID":"5366e918-1bda-4ea8-a5e1-a979b86c99ec","Type":"ContainerDied","Data":"8d4202fb8ca62248be84b3e57f789a8695918629668d4c15e743f6bc217b7d4a"}
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.845448 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-4cjtk"
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.846719 4860 scope.go:117] "RemoveContainer" containerID="88edf29c1354003d21432669af18005f7faa3fdee3d43bb4a15694476a0dffc7"
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.877330 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-4cjtk"]
Jan 21 21:28:32 crc kubenswrapper[4860]: I0121 21:28:32.886954 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-index-4cjtk"]
Jan 21 21:28:33 crc kubenswrapper[4860]: I0121 21:28:33.048282 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-8w757"]
Jan 21 21:28:33 crc kubenswrapper[4860]: W0121 21:28:33.061001 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdbebf1c_8bd6_4223_939a_f088d773cdc5.slice/crio-a3db0b483c579328190fd5b1300a871ca7732c73283acd68f15f5fdb358262e0 WatchSource:0}: Error finding container a3db0b483c579328190fd5b1300a871ca7732c73283acd68f15f5fdb358262e0: Status 404 returned error can't find the container with id a3db0b483c579328190fd5b1300a871ca7732c73283acd68f15f5fdb358262e0
Jan 21 21:28:33 crc kubenswrapper[4860]: I0121 21:28:33.859165 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-8w757" event={"ID":"bdbebf1c-8bd6-4223-939a-f088d773cdc5","Type":"ContainerStarted","Data":"98d5f64a6db5d791b49136659069e7e0f3577664dbef60d2370a0fc979d2353a"}
Jan 21 21:28:33 crc kubenswrapper[4860]: I0121 21:28:33.859217 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-8w757" event={"ID":"bdbebf1c-8bd6-4223-939a-f088d773cdc5","Type":"ContainerStarted","Data":"a3db0b483c579328190fd5b1300a871ca7732c73283acd68f15f5fdb358262e0"}
Jan 21 21:28:33 crc kubenswrapper[4860]: I0121 21:28:33.880036 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-8w757" podStartSLOduration=1.757770635 podStartE2EDuration="1.880016879s" podCreationTimestamp="2026-01-21 21:28:32 +0000 UTC" firstStartedPulling="2026-01-21 21:28:33.064197028 +0000 UTC m=+1205.286375518" lastFinishedPulling="2026-01-21 21:28:33.186443302 +0000 UTC m=+1205.408621762" observedRunningTime="2026-01-21 21:28:33.877969246 +0000 UTC m=+1206.100147736" watchObservedRunningTime="2026-01-21 21:28:33.880016879 +0000 UTC m=+1206.102195349"
Jan 21 21:28:34 crc kubenswrapper[4860]: I0121 21:28:34.593112 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5366e918-1bda-4ea8-a5e1-a979b86c99ec" path="/var/lib/kubelet/pods/5366e918-1bda-4ea8-a5e1-a979b86c99ec/volumes"
Jan 21 21:28:42 crc kubenswrapper[4860]: I0121 21:28:42.561540 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/watcher-operator-index-8w757"
Jan 21 21:28:42 crc kubenswrapper[4860]: I0121 21:28:42.562344 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-index-8w757"
Jan 21 21:28:42 crc kubenswrapper[4860]: I0121 21:28:42.611467 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/watcher-operator-index-8w757"
Jan 21 21:28:42 crc kubenswrapper[4860]: I0121 21:28:42.976886 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-index-8w757"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.256871 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"]
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.259183 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.263323 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xhwmg"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.270363 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"]
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.408631 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jvn4\" (UniqueName: \"kubernetes.io/projected/4882d6a4-5a1e-446f-aba5-22af497454ef-kube-api-access-6jvn4\") pod \"621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") " pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.408697 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-bundle\") pod \"621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") " pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.409080 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-util\") pod \"621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") " pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.510578 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-util\") pod \"621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") " pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.510693 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jvn4\" (UniqueName: \"kubernetes.io/projected/4882d6a4-5a1e-446f-aba5-22af497454ef-kube-api-access-6jvn4\") pod \"621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") " pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.510735 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-bundle\") pod \"621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") " pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.511472 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-bundle\") pod \"621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") " pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.511466 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-util\") pod \"621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") " pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.531823 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jvn4\" (UniqueName: \"kubernetes.io/projected/4882d6a4-5a1e-446f-aba5-22af497454ef-kube-api-access-6jvn4\") pod \"621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") " pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:46 crc kubenswrapper[4860]: I0121 21:28:46.578241 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:47 crc kubenswrapper[4860]: I0121 21:28:47.062854 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"]
Jan 21 21:28:47 crc kubenswrapper[4860]: I0121 21:28:47.985750 4860 generic.go:334] "Generic (PLEG): container finished" podID="4882d6a4-5a1e-446f-aba5-22af497454ef" containerID="69b8857726cfdd087724b3716877356fde8381924689ddb33d486c34f0ac574e" exitCode=0
Jan 21 21:28:47 crc kubenswrapper[4860]: I0121 21:28:47.986088 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278" event={"ID":"4882d6a4-5a1e-446f-aba5-22af497454ef","Type":"ContainerDied","Data":"69b8857726cfdd087724b3716877356fde8381924689ddb33d486c34f0ac574e"}
Jan 21 21:28:47 crc kubenswrapper[4860]: I0121 21:28:47.986128 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278" event={"ID":"4882d6a4-5a1e-446f-aba5-22af497454ef","Type":"ContainerStarted","Data":"4981c607501306f33e96f29c4019d606c1982d2b858929379a0afc6e47156655"}
Jan 21 21:28:50 crc kubenswrapper[4860]: I0121 21:28:50.017914 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278" event={"ID":"4882d6a4-5a1e-446f-aba5-22af497454ef","Type":"ContainerStarted","Data":"1453c4605b62e5088c07775a7f559493e385c41d0e52a7e3557e9b65953a310c"}
Jan 21 21:28:51 crc kubenswrapper[4860]: I0121 21:28:51.031493 4860 generic.go:334] "Generic (PLEG): container finished" podID="4882d6a4-5a1e-446f-aba5-22af497454ef" containerID="1453c4605b62e5088c07775a7f559493e385c41d0e52a7e3557e9b65953a310c" exitCode=0
Jan 21 21:28:51 crc kubenswrapper[4860]: I0121 21:28:51.031601 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278" event={"ID":"4882d6a4-5a1e-446f-aba5-22af497454ef","Type":"ContainerDied","Data":"1453c4605b62e5088c07775a7f559493e385c41d0e52a7e3557e9b65953a310c"}
Jan 21 21:28:52 crc kubenswrapper[4860]: I0121 21:28:52.042810 4860 generic.go:334] "Generic (PLEG): container finished" podID="4882d6a4-5a1e-446f-aba5-22af497454ef" containerID="ce3be78bb0423ae40a0401ee9144c84017b66ae27c8620cf2118788063f54a04" exitCode=0
Jan 21 21:28:52 crc kubenswrapper[4860]: I0121 21:28:52.042922 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278" event={"ID":"4882d6a4-5a1e-446f-aba5-22af497454ef","Type":"ContainerDied","Data":"ce3be78bb0423ae40a0401ee9144c84017b66ae27c8620cf2118788063f54a04"}
Jan 21 21:28:53 crc kubenswrapper[4860]: I0121 21:28:53.357248 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278"
Jan 21 21:28:53 crc kubenswrapper[4860]: I0121 21:28:53.491776 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jvn4\" (UniqueName: \"kubernetes.io/projected/4882d6a4-5a1e-446f-aba5-22af497454ef-kube-api-access-6jvn4\") pod \"4882d6a4-5a1e-446f-aba5-22af497454ef\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") "
Jan 21 21:28:53 crc kubenswrapper[4860]: I0121 21:28:53.492136 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-util\") pod \"4882d6a4-5a1e-446f-aba5-22af497454ef\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") "
Jan 21 21:28:53 crc kubenswrapper[4860]: I0121 21:28:53.492197 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-bundle\") pod \"4882d6a4-5a1e-446f-aba5-22af497454ef\" (UID: \"4882d6a4-5a1e-446f-aba5-22af497454ef\") "
Jan 21 21:28:53 crc kubenswrapper[4860]: I0121 21:28:53.493634 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-bundle" (OuterVolumeSpecName: "bundle") pod "4882d6a4-5a1e-446f-aba5-22af497454ef" (UID: "4882d6a4-5a1e-446f-aba5-22af497454ef"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:28:53 crc kubenswrapper[4860]: I0121 21:28:53.510286 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4882d6a4-5a1e-446f-aba5-22af497454ef-kube-api-access-6jvn4" (OuterVolumeSpecName: "kube-api-access-6jvn4") pod "4882d6a4-5a1e-446f-aba5-22af497454ef" (UID: "4882d6a4-5a1e-446f-aba5-22af497454ef"). InnerVolumeSpecName "kube-api-access-6jvn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:28:53 crc kubenswrapper[4860]: I0121 21:28:53.516101 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-util" (OuterVolumeSpecName: "util") pod "4882d6a4-5a1e-446f-aba5-22af497454ef" (UID: "4882d6a4-5a1e-446f-aba5-22af497454ef"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:28:53 crc kubenswrapper[4860]: I0121 21:28:53.593522 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jvn4\" (UniqueName: \"kubernetes.io/projected/4882d6a4-5a1e-446f-aba5-22af497454ef-kube-api-access-6jvn4\") on node \"crc\" DevicePath \"\"" Jan 21 21:28:53 crc kubenswrapper[4860]: I0121 21:28:53.593562 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-util\") on node \"crc\" DevicePath \"\"" Jan 21 21:28:53 crc kubenswrapper[4860]: I0121 21:28:53.593574 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4882d6a4-5a1e-446f-aba5-22af497454ef-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:28:54 crc kubenswrapper[4860]: I0121 21:28:54.063715 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278" event={"ID":"4882d6a4-5a1e-446f-aba5-22af497454ef","Type":"ContainerDied","Data":"4981c607501306f33e96f29c4019d606c1982d2b858929379a0afc6e47156655"} Jan 21 21:28:54 crc kubenswrapper[4860]: I0121 21:28:54.063817 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4981c607501306f33e96f29c4019d606c1982d2b858929379a0afc6e47156655" Jan 21 21:28:54 crc kubenswrapper[4860]: I0121 21:28:54.063974 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.207122 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp"] Jan 21 21:28:59 crc kubenswrapper[4860]: E0121 21:28:59.209452 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4882d6a4-5a1e-446f-aba5-22af497454ef" containerName="pull" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.209500 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4882d6a4-5a1e-446f-aba5-22af497454ef" containerName="pull" Jan 21 21:28:59 crc kubenswrapper[4860]: E0121 21:28:59.209546 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4882d6a4-5a1e-446f-aba5-22af497454ef" containerName="extract" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.209555 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4882d6a4-5a1e-446f-aba5-22af497454ef" containerName="extract" Jan 21 21:28:59 crc kubenswrapper[4860]: E0121 21:28:59.209585 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4882d6a4-5a1e-446f-aba5-22af497454ef" containerName="util" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.209594 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4882d6a4-5a1e-446f-aba5-22af497454ef" containerName="util" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.210271 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4882d6a4-5a1e-446f-aba5-22af497454ef" containerName="extract" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.216848 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.220489 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-service-cert" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.220771 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-8m984" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.250152 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp"] Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.310339 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-webhook-cert\") pod \"watcher-operator-controller-manager-9bbf7b7d-cpjrp\" (UID: \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.310430 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftx4j\" (UniqueName: \"kubernetes.io/projected/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-kube-api-access-ftx4j\") pod \"watcher-operator-controller-manager-9bbf7b7d-cpjrp\" (UID: \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.310574 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-apiservice-cert\") pod \"watcher-operator-controller-manager-9bbf7b7d-cpjrp\" (UID: 
\"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.413177 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-webhook-cert\") pod \"watcher-operator-controller-manager-9bbf7b7d-cpjrp\" (UID: \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.413250 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftx4j\" (UniqueName: \"kubernetes.io/projected/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-kube-api-access-ftx4j\") pod \"watcher-operator-controller-manager-9bbf7b7d-cpjrp\" (UID: \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.413303 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-apiservice-cert\") pod \"watcher-operator-controller-manager-9bbf7b7d-cpjrp\" (UID: \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.422170 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-webhook-cert\") pod \"watcher-operator-controller-manager-9bbf7b7d-cpjrp\" (UID: \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.427429 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-apiservice-cert\") pod \"watcher-operator-controller-manager-9bbf7b7d-cpjrp\" (UID: \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.437063 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftx4j\" (UniqueName: \"kubernetes.io/projected/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-kube-api-access-ftx4j\") pod \"watcher-operator-controller-manager-9bbf7b7d-cpjrp\" (UID: \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:28:59 crc kubenswrapper[4860]: I0121 21:28:59.549948 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:29:00 crc kubenswrapper[4860]: I0121 21:29:00.143802 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp"] Jan 21 21:29:00 crc kubenswrapper[4860]: I0121 21:29:00.174809 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" event={"ID":"2fb0f70a-52fa-4058-ba4d-e824f51de4e5","Type":"ContainerStarted","Data":"63a31e2263eaa01db0586e3c629edce2a46e0bd9e5b22dbdb31fe6435ad4e0b1"} Jan 21 21:29:01 crc kubenswrapper[4860]: I0121 21:29:01.184735 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" event={"ID":"2fb0f70a-52fa-4058-ba4d-e824f51de4e5","Type":"ContainerStarted","Data":"6b7030f0a95275a40413c5075aa65d26c28f851a4ee3fa231e7fdc8ff55f1f31"} Jan 21 21:29:01 crc kubenswrapper[4860]: I0121 21:29:01.184841 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:29:01 crc kubenswrapper[4860]: I0121 21:29:01.206026 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" podStartSLOduration=2.206008243 podStartE2EDuration="2.206008243s" podCreationTimestamp="2026-01-21 21:28:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:29:01.202561907 +0000 UTC m=+1233.424740377" watchObservedRunningTime="2026-01-21 21:29:01.206008243 +0000 UTC m=+1233.428186713" Jan 21 21:29:09 crc kubenswrapper[4860]: I0121 21:29:09.556324 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:29:10 crc kubenswrapper[4860]: I0121 21:29:10.705249 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p"] Jan 21 21:29:10 crc kubenswrapper[4860]: I0121 21:29:10.707229 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:10 crc kubenswrapper[4860]: I0121 21:29:10.735809 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p"] Jan 21 21:29:10 crc kubenswrapper[4860]: I0121 21:29:10.823036 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84bd609c-f081-46a8-80ba-9c251389699e-apiservice-cert\") pod \"watcher-operator-controller-manager-844f9d4c74-gwp5p\" (UID: \"84bd609c-f081-46a8-80ba-9c251389699e\") " pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:10 crc kubenswrapper[4860]: I0121 21:29:10.823160 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sssg5\" (UniqueName: \"kubernetes.io/projected/84bd609c-f081-46a8-80ba-9c251389699e-kube-api-access-sssg5\") pod \"watcher-operator-controller-manager-844f9d4c74-gwp5p\" (UID: \"84bd609c-f081-46a8-80ba-9c251389699e\") " pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:10 crc kubenswrapper[4860]: I0121 21:29:10.823424 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84bd609c-f081-46a8-80ba-9c251389699e-webhook-cert\") pod \"watcher-operator-controller-manager-844f9d4c74-gwp5p\" (UID: \"84bd609c-f081-46a8-80ba-9c251389699e\") " pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:10 crc kubenswrapper[4860]: I0121 21:29:10.925076 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84bd609c-f081-46a8-80ba-9c251389699e-webhook-cert\") pod 
\"watcher-operator-controller-manager-844f9d4c74-gwp5p\" (UID: \"84bd609c-f081-46a8-80ba-9c251389699e\") " pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:10 crc kubenswrapper[4860]: I0121 21:29:10.925165 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84bd609c-f081-46a8-80ba-9c251389699e-apiservice-cert\") pod \"watcher-operator-controller-manager-844f9d4c74-gwp5p\" (UID: \"84bd609c-f081-46a8-80ba-9c251389699e\") " pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:10 crc kubenswrapper[4860]: I0121 21:29:10.925202 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sssg5\" (UniqueName: \"kubernetes.io/projected/84bd609c-f081-46a8-80ba-9c251389699e-kube-api-access-sssg5\") pod \"watcher-operator-controller-manager-844f9d4c74-gwp5p\" (UID: \"84bd609c-f081-46a8-80ba-9c251389699e\") " pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:10 crc kubenswrapper[4860]: I0121 21:29:10.931537 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84bd609c-f081-46a8-80ba-9c251389699e-webhook-cert\") pod \"watcher-operator-controller-manager-844f9d4c74-gwp5p\" (UID: \"84bd609c-f081-46a8-80ba-9c251389699e\") " pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:10 crc kubenswrapper[4860]: I0121 21:29:10.932063 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84bd609c-f081-46a8-80ba-9c251389699e-apiservice-cert\") pod \"watcher-operator-controller-manager-844f9d4c74-gwp5p\" (UID: \"84bd609c-f081-46a8-80ba-9c251389699e\") " pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:10 crc 
kubenswrapper[4860]: I0121 21:29:10.942924 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sssg5\" (UniqueName: \"kubernetes.io/projected/84bd609c-f081-46a8-80ba-9c251389699e-kube-api-access-sssg5\") pod \"watcher-operator-controller-manager-844f9d4c74-gwp5p\" (UID: \"84bd609c-f081-46a8-80ba-9c251389699e\") " pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:11 crc kubenswrapper[4860]: I0121 21:29:11.048366 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:11 crc kubenswrapper[4860]: I0121 21:29:11.579614 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p"] Jan 21 21:29:12 crc kubenswrapper[4860]: I0121 21:29:12.294093 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" event={"ID":"84bd609c-f081-46a8-80ba-9c251389699e","Type":"ContainerStarted","Data":"02b2de51bdaf904188461f94b66e95ceecb4367b6cd7993f229a92ddae3cf447"} Jan 21 21:29:12 crc kubenswrapper[4860]: I0121 21:29:12.294166 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" event={"ID":"84bd609c-f081-46a8-80ba-9c251389699e","Type":"ContainerStarted","Data":"6eb563ee98540cec5af5978d284dd5009fd7f3c6c5f720c1232958d89a0de767"} Jan 21 21:29:12 crc kubenswrapper[4860]: I0121 21:29:12.294371 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:12 crc kubenswrapper[4860]: I0121 21:29:12.321716 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" 
podStartSLOduration=2.321687262 podStartE2EDuration="2.321687262s" podCreationTimestamp="2026-01-21 21:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:29:12.315056979 +0000 UTC m=+1244.537235489" watchObservedRunningTime="2026-01-21 21:29:12.321687262 +0000 UTC m=+1244.543865722" Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.058041 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-844f9d4c74-gwp5p" Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.191355 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp"] Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.191926 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" podUID="2fb0f70a-52fa-4058-ba4d-e824f51de4e5" containerName="manager" containerID="cri-o://6b7030f0a95275a40413c5075aa65d26c28f851a4ee3fa231e7fdc8ff55f1f31" gracePeriod=10 Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.380748 4860 generic.go:334] "Generic (PLEG): container finished" podID="2fb0f70a-52fa-4058-ba4d-e824f51de4e5" containerID="6b7030f0a95275a40413c5075aa65d26c28f851a4ee3fa231e7fdc8ff55f1f31" exitCode=0 Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.380824 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" event={"ID":"2fb0f70a-52fa-4058-ba4d-e824f51de4e5","Type":"ContainerDied","Data":"6b7030f0a95275a40413c5075aa65d26c28f851a4ee3fa231e7fdc8ff55f1f31"} Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.668676 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.768289 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-apiservice-cert\") pod \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\" (UID: \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.768588 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-webhook-cert\") pod \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\" (UID: \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.768662 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftx4j\" (UniqueName: \"kubernetes.io/projected/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-kube-api-access-ftx4j\") pod \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\" (UID: \"2fb0f70a-52fa-4058-ba4d-e824f51de4e5\") " Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.777170 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2fb0f70a-52fa-4058-ba4d-e824f51de4e5" (UID: "2fb0f70a-52fa-4058-ba4d-e824f51de4e5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.778120 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "2fb0f70a-52fa-4058-ba4d-e824f51de4e5" (UID: "2fb0f70a-52fa-4058-ba4d-e824f51de4e5"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.783324 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-kube-api-access-ftx4j" (OuterVolumeSpecName: "kube-api-access-ftx4j") pod "2fb0f70a-52fa-4058-ba4d-e824f51de4e5" (UID: "2fb0f70a-52fa-4058-ba4d-e824f51de4e5"). InnerVolumeSpecName "kube-api-access-ftx4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.872783 4860 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.872861 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftx4j\" (UniqueName: \"kubernetes.io/projected/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-kube-api-access-ftx4j\") on node \"crc\" DevicePath \"\"" Jan 21 21:29:21 crc kubenswrapper[4860]: I0121 21:29:21.872917 4860 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fb0f70a-52fa-4058-ba4d-e824f51de4e5-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 21:29:22 crc kubenswrapper[4860]: I0121 21:29:22.391335 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" event={"ID":"2fb0f70a-52fa-4058-ba4d-e824f51de4e5","Type":"ContainerDied","Data":"63a31e2263eaa01db0586e3c629edce2a46e0bd9e5b22dbdb31fe6435ad4e0b1"} Jan 21 21:29:22 crc kubenswrapper[4860]: I0121 21:29:22.392426 4860 scope.go:117] "RemoveContainer" containerID="6b7030f0a95275a40413c5075aa65d26c28f851a4ee3fa231e7fdc8ff55f1f31" Jan 21 21:29:22 crc kubenswrapper[4860]: I0121 21:29:22.392373 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp" Jan 21 21:29:22 crc kubenswrapper[4860]: I0121 21:29:22.431064 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp"] Jan 21 21:29:22 crc kubenswrapper[4860]: I0121 21:29:22.437782 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9bbf7b7d-cpjrp"] Jan 21 21:29:22 crc kubenswrapper[4860]: I0121 21:29:22.592622 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb0f70a-52fa-4058-ba4d-e824f51de4e5" path="/var/lib/kubelet/pods/2fb0f70a-52fa-4058-ba4d-e824f51de4e5/volumes" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.621420 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Jan 21 21:29:34 crc kubenswrapper[4860]: E0121 21:29:34.622839 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb0f70a-52fa-4058-ba4d-e824f51de4e5" containerName="manager" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.622856 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb0f70a-52fa-4058-ba4d-e824f51de4e5" containerName="manager" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.623086 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb0f70a-52fa-4058-ba4d-e824f51de4e5" containerName="manager" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.624640 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.628133 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openshift-service-ca.crt" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.628653 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-plugins-conf" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.628778 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"kube-root-ca.crt" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.635284 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-config-data" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.635427 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-server-dockercfg-wmgts" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.635605 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-server-conf" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.635757 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-erlang-cookie" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.636383 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-svc" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.651475 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-default-user" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.692377 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.705463 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.705543 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.705584 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.705913 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.706026 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ad20e687-326f-4ee5-a3d8-a68fd63c6588\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ad20e687-326f-4ee5-a3d8-a68fd63c6588\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.706103 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.706133 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.706185 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.706313 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.706414 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.706558 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldz6s\" (UniqueName: \"kubernetes.io/projected/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-kube-api-access-ldz6s\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.808398 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.808475 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.808515 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.808571 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.808618 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-ad20e687-326f-4ee5-a3d8-a68fd63c6588\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ad20e687-326f-4ee5-a3d8-a68fd63c6588\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.808679 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.808713 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.808756 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.808801 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.808854 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.808993 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldz6s\" (UniqueName: \"kubernetes.io/projected/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-kube-api-access-ldz6s\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.809969 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.810153 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.810377 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.811130 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.812044 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.818872 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.818925 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.819289 4860 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.819356 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ad20e687-326f-4ee5-a3d8-a68fd63c6588\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ad20e687-326f-4ee5-a3d8-a68fd63c6588\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/921a3b6a1e5497d5a2b6355059e652c1074dd70c464996bf4f141d58163ede36/globalmount\"" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.820779 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.821491 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.829834 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldz6s\" (UniqueName: \"kubernetes.io/projected/d6da3cbd-8875-47bf-95ab-3734f22fe8a0-kube-api-access-ldz6s\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.856225 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ad20e687-326f-4ee5-a3d8-a68fd63c6588\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ad20e687-326f-4ee5-a3d8-a68fd63c6588\") pod \"rabbitmq-server-0\" (UID: \"d6da3cbd-8875-47bf-95ab-3734f22fe8a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:34 crc kubenswrapper[4860]: I0121 21:29:34.948240 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.275459 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.277756 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.283664 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-notifications-svc" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.283842 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-dockercfg-fzw6v" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.283984 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-erlang-cookie" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.284135 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-default-user" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.284167 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-conf" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.284212 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-config-data" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 
21:29:35.284491 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-plugins-conf" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.290036 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.327026 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.327173 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.327206 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.327240 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " 
pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.327282 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.327307 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.327372 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.327420 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.327455 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb2pq\" (UniqueName: 
\"kubernetes.io/projected/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-kube-api-access-jb2pq\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.327501 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.327544 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bfc2fd44-f0fc-4a0f-ba09-0856f88e914c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfc2fd44-f0fc-4a0f-ba09-0856f88e914c\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.429267 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.429376 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb2pq\" (UniqueName: \"kubernetes.io/projected/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-kube-api-access-jb2pq\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc 
kubenswrapper[4860]: I0121 21:29:35.429418 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.429456 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bfc2fd44-f0fc-4a0f-ba09-0856f88e914c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfc2fd44-f0fc-4a0f-ba09-0856f88e914c\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.429496 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.429533 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.429565 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") 
" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.429592 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.429627 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.429653 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.429701 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.431118 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: 
\"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.431580 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.432301 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.433695 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.433847 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.434579 4860 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.434608 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bfc2fd44-f0fc-4a0f-ba09-0856f88e914c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfc2fd44-f0fc-4a0f-ba09-0856f88e914c\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/76c5d0100f32db69d66c4c6223626f75f9443ad0d18dbd5b6acc1e37ff9e9e17/globalmount\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.437616 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.438998 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.439729 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.443749 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.465436 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb2pq\" (UniqueName: \"kubernetes.io/projected/f04c4d4c-f490-4a77-94fa-bea0fc5a43f3-kube-api-access-jb2pq\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.495890 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.496960 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bfc2fd44-f0fc-4a0f-ba09-0856f88e914c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfc2fd44-f0fc-4a0f-ba09-0856f88e914c\") pod \"rabbitmq-notifications-server-0\" (UID: \"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:35 crc kubenswrapper[4860]: I0121 21:29:35.609053 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 21 21:29:36 crc kubenswrapper[4860]: I0121 21:29:36.153741 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 21 21:29:36 crc kubenswrapper[4860]: I0121 21:29:36.528854 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"d6da3cbd-8875-47bf-95ab-3734f22fe8a0","Type":"ContainerStarted","Data":"87caa1c63bc494486ff3a70884b057b1515cc79262c6b2558e3f8337dea42d4c"} Jan 21 21:29:36 crc kubenswrapper[4860]: I0121 21:29:36.531183 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3","Type":"ContainerStarted","Data":"18036f4b608833536d20fbfe46c5a48ba0697ba5805f428f5919d0ca6b24a9af"} Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.060507 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.062635 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.066618 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"galera-openstack-dockercfg-c45jc" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.074685 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-scripts" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.075501 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config-data" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.081987 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-galera-openstack-svc" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.099144 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"combined-ca-bundle" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.108059 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.120979 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.122614 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.128156 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.128481 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.128708 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-rbkxl" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.138755 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.173174 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d38c2bac-c957-454f-81e3-db76b749ff2d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.173311 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d38c2bac-c957-454f-81e3-db76b749ff2d-config-data-default\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.173388 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38c2bac-c957-454f-81e3-db76b749ff2d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 
21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.173448 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38c2bac-c957-454f-81e3-db76b749ff2d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.173728 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5j8\" (UniqueName: \"kubernetes.io/projected/d38c2bac-c957-454f-81e3-db76b749ff2d-kube-api-access-cb5j8\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.173794 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d38c2bac-c957-454f-81e3-db76b749ff2d-kolla-config\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.173966 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d38c2bac-c957-454f-81e3-db76b749ff2d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.174530 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1d6d058-0f9d-4b79-8321-e4ab35bbf969\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1d6d058-0f9d-4b79-8321-e4ab35bbf969\") pod \"openstack-galera-0\" (UID: 
\"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277219 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277331 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb5j8\" (UniqueName: \"kubernetes.io/projected/d38c2bac-c957-454f-81e3-db76b749ff2d-kube-api-access-cb5j8\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277377 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d38c2bac-c957-454f-81e3-db76b749ff2d-kolla-config\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277408 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-kolla-config\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277456 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d38c2bac-c957-454f-81e3-db76b749ff2d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " 
pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277501 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277541 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvs42\" (UniqueName: \"kubernetes.io/projected/c1817e64-9ce0-4542-a32b-da4c6dd08267-kube-api-access-zvs42\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277587 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c1d6d058-0f9d-4b79-8321-e4ab35bbf969\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1d6d058-0f9d-4b79-8321-e4ab35bbf969\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277615 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-config-data\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277655 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d38c2bac-c957-454f-81e3-db76b749ff2d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " 
pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277684 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d38c2bac-c957-454f-81e3-db76b749ff2d-config-data-default\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277715 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38c2bac-c957-454f-81e3-db76b749ff2d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.277743 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38c2bac-c957-454f-81e3-db76b749ff2d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.284100 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d38c2bac-c957-454f-81e3-db76b749ff2d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.285805 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d38c2bac-c957-454f-81e3-db76b749ff2d-config-data-default\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc 
kubenswrapper[4860]: I0121 21:29:37.287003 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d38c2bac-c957-454f-81e3-db76b749ff2d-kolla-config\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.287190 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d38c2bac-c957-454f-81e3-db76b749ff2d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.292891 4860 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.292990 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c1d6d058-0f9d-4b79-8321-e4ab35bbf969\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1d6d058-0f9d-4b79-8321-e4ab35bbf969\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4820c354c3f35bd38a2ab8e6c29cd3e17e3c37d3a271dfe523a56bde12abe372/globalmount\"" pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.295846 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38c2bac-c957-454f-81e3-db76b749ff2d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.303484 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38c2bac-c957-454f-81e3-db76b749ff2d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.310111 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb5j8\" (UniqueName: \"kubernetes.io/projected/d38c2bac-c957-454f-81e3-db76b749ff2d-kube-api-access-cb5j8\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.379439 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvs42\" (UniqueName: \"kubernetes.io/projected/c1817e64-9ce0-4542-a32b-da4c6dd08267-kube-api-access-zvs42\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.379583 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-config-data\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.379669 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.379721 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-kolla-config\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.379777 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.382024 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-config-data\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.382735 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-kolla-config\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.393170 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.403235 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" 
Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.416815 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvs42\" (UniqueName: \"kubernetes.io/projected/c1817e64-9ce0-4542-a32b-da4c6dd08267-kube-api-access-zvs42\") pod \"memcached-0\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.427059 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c1d6d058-0f9d-4b79-8321-e4ab35bbf969\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1d6d058-0f9d-4b79-8321-e4ab35bbf969\") pod \"openstack-galera-0\" (UID: \"d38c2bac-c957-454f-81e3-db76b749ff2d\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.498349 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.582707 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.584278 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.587454 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"telemetry-ceilometer-dockercfg-m6clg" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.614585 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.687298 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc5hk\" (UniqueName: \"kubernetes.io/projected/7e3962c5-1406-4ba2-8183-f001ebb09796-kube-api-access-zc5hk\") pod \"kube-state-metrics-0\" (UID: \"7e3962c5-1406-4ba2-8183-f001ebb09796\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.692969 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.791697 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc5hk\" (UniqueName: \"kubernetes.io/projected/7e3962c5-1406-4ba2-8183-f001ebb09796-kube-api-access-zc5hk\") pod \"kube-state-metrics-0\" (UID: \"7e3962c5-1406-4ba2-8183-f001ebb09796\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.828728 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc5hk\" (UniqueName: \"kubernetes.io/projected/7e3962c5-1406-4ba2-8183-f001ebb09796-kube-api-access-zc5hk\") pod \"kube-state-metrics-0\" (UID: \"7e3962c5-1406-4ba2-8183-f001ebb09796\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:29:37 crc kubenswrapper[4860]: I0121 21:29:37.977718 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.523558 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.566365 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"c1817e64-9ce0-4542-a32b-da4c6dd08267","Type":"ContainerStarted","Data":"7a40e35cb875291455cc100e166bd8562d1b64021c0acb4cb1d34c6569cf190e"} Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.699782 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.737379 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.739465 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.750137 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-web-config" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.750594 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-cluster-tls-config" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.750966 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-generated" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.751073 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-tls-assets-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.751184 4860 reflector.go:368] Caches populated for *v1.Secret from 
object-"watcher-kuttl-default"/"metric-storage-alertmanager-dockercfg-k7fq5" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.760720 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.811598 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.820480 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d48b5afa-e436-4bbb-8131-2bea3323fe51-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.820604 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d48b5afa-e436-4bbb-8131-2bea3323fe51-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.820861 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d48b5afa-e436-4bbb-8131-2bea3323fe51-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.820991 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d48b5afa-e436-4bbb-8131-2bea3323fe51-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: 
\"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.821065 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d48b5afa-e436-4bbb-8131-2bea3323fe51-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.821261 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k42jv\" (UniqueName: \"kubernetes.io/projected/d48b5afa-e436-4bbb-8131-2bea3323fe51-kube-api-access-k42jv\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.821365 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/d48b5afa-e436-4bbb-8131-2bea3323fe51-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: W0121 21:29:38.836416 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e3962c5_1406_4ba2_8183_f001ebb09796.slice/crio-b28869fa5824b9cc0386f007f983c774c996211d185347b621d327c23fd1e7b9 WatchSource:0}: Error finding container b28869fa5824b9cc0386f007f983c774c996211d185347b621d327c23fd1e7b9: Status 404 returned error can't find the container with id b28869fa5824b9cc0386f007f983c774c996211d185347b621d327c23fd1e7b9 Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 
21:29:38.923732 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d48b5afa-e436-4bbb-8131-2bea3323fe51-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.923849 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d48b5afa-e436-4bbb-8131-2bea3323fe51-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.923893 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d48b5afa-e436-4bbb-8131-2bea3323fe51-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.923958 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d48b5afa-e436-4bbb-8131-2bea3323fe51-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.923994 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k42jv\" (UniqueName: \"kubernetes.io/projected/d48b5afa-e436-4bbb-8131-2bea3323fe51-kube-api-access-k42jv\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc 
kubenswrapper[4860]: I0121 21:29:38.924024 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/d48b5afa-e436-4bbb-8131-2bea3323fe51-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.924075 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d48b5afa-e436-4bbb-8131-2bea3323fe51-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.927083 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/d48b5afa-e436-4bbb-8131-2bea3323fe51-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.939439 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d48b5afa-e436-4bbb-8131-2bea3323fe51-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.939579 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d48b5afa-e436-4bbb-8131-2bea3323fe51-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " 
pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.951567 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d48b5afa-e436-4bbb-8131-2bea3323fe51-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.951697 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d48b5afa-e436-4bbb-8131-2bea3323fe51-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.981223 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d48b5afa-e436-4bbb-8131-2bea3323fe51-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.992004 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs"] Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.994210 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" Jan 21 21:29:38 crc kubenswrapper[4860]: I0121 21:29:38.998245 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k42jv\" (UniqueName: \"kubernetes.io/projected/d48b5afa-e436-4bbb-8131-2bea3323fe51-kube-api-access-k42jv\") pod \"alertmanager-metric-storage-0\" (UID: \"d48b5afa-e436-4bbb-8131-2bea3323fe51\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:38.999495 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs"] Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.009216 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-nrkt9" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.009972 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.070075 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.139678 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.146850 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.157705 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkll8\" (UniqueName: \"kubernetes.io/projected/6a4226f5-36cd-49b1-bbf3-2d13973b45b5-kube-api-access-kkll8\") pod \"observability-ui-dashboards-66cbf594b5-qj2fs\" (UID: \"6a4226f5-36cd-49b1-bbf3-2d13973b45b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.159696 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4226f5-36cd-49b1-bbf3-2d13973b45b5-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-qj2fs\" (UID: \"6a4226f5-36cd-49b1-bbf3-2d13973b45b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.170895 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.171212 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.171356 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-9ssx9" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.172768 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.188334 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Jan 21 21:29:39 crc kubenswrapper[4860]: 
I0121 21:29:39.192767 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.192958 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.197944 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.209693 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.261481 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.261559 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/856e4581-4208-4131-94e2-e572ed382903-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.261918 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkll8\" (UniqueName: \"kubernetes.io/projected/6a4226f5-36cd-49b1-bbf3-2d13973b45b5-kube-api-access-kkll8\") pod \"observability-ui-dashboards-66cbf594b5-qj2fs\" (UID: \"6a4226f5-36cd-49b1-bbf3-2d13973b45b5\") " 
pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.262007 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.262192 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.262238 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.262382 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.262427 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.262600 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-config\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.262664 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.262709 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4226f5-36cd-49b1-bbf3-2d13973b45b5-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-qj2fs\" (UID: \"6a4226f5-36cd-49b1-bbf3-2d13973b45b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.262760 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbqhz\" (UniqueName: \"kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-kube-api-access-kbqhz\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") 
" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: E0121 21:29:39.262942 4860 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 21 21:29:39 crc kubenswrapper[4860]: E0121 21:29:39.263054 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a4226f5-36cd-49b1-bbf3-2d13973b45b5-serving-cert podName:6a4226f5-36cd-49b1-bbf3-2d13973b45b5 nodeName:}" failed. No retries permitted until 2026-01-21 21:29:39.763015288 +0000 UTC m=+1271.985193758 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6a4226f5-36cd-49b1-bbf3-2d13973b45b5-serving-cert") pod "observability-ui-dashboards-66cbf594b5-qj2fs" (UID: "6a4226f5-36cd-49b1-bbf3-2d13973b45b5") : secret "observability-ui-dashboards" not found Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.326115 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkll8\" (UniqueName: \"kubernetes.io/projected/6a4226f5-36cd-49b1-bbf3-2d13973b45b5-kube-api-access-kkll8\") pod \"observability-ui-dashboards-66cbf594b5-qj2fs\" (UID: \"6a4226f5-36cd-49b1-bbf3-2d13973b45b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.376488 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.376834 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" 
(UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.377093 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-config\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.377242 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.377408 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbqhz\" (UniqueName: \"kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-kube-api-access-kbqhz\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.377527 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.377610 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/856e4581-4208-4131-94e2-e572ed382903-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.377751 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.389158 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.385509 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.389521 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 
21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.390287 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.405480 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.420681 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.422321 4860 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.433517 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7fd7fc261f9c8bc632f5a76ba4441601341d00dd6bfb49c87553592a23d2ac9f/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.433594 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/856e4581-4208-4131-94e2-e572ed382903-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.434882 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.454428 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbqhz\" (UniqueName: \"kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-kube-api-access-kbqhz\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.472981 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-config\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.485135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.598326 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\") pod \"prometheus-metric-storage-0\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.651112 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"d38c2bac-c957-454f-81e3-db76b749ff2d","Type":"ContainerStarted","Data":"0ed5f1f533622c8fb324fed00711a6b543cbde653b3858f17673db078b133e5e"} Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.668163 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"7e3962c5-1406-4ba2-8183-f001ebb09796","Type":"ContainerStarted","Data":"b28869fa5824b9cc0386f007f983c774c996211d185347b621d327c23fd1e7b9"} Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.764119 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67b78c6595-xpcmw"] Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.776751 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.791141 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.828777 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67b78c6595-xpcmw"] Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.843415 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4226f5-36cd-49b1-bbf3-2d13973b45b5-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-qj2fs\" (UID: \"6a4226f5-36cd-49b1-bbf3-2d13973b45b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.853545 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4226f5-36cd-49b1-bbf3-2d13973b45b5-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-qj2fs\" (UID: \"6a4226f5-36cd-49b1-bbf3-2d13973b45b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.949325 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-console-config\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.949395 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-service-ca\") pod \"console-67b78c6595-xpcmw\" (UID: 
\"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.949472 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-console-serving-cert\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.949495 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-trusted-ca-bundle\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.949528 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z5q2\" (UniqueName: \"kubernetes.io/projected/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-kube-api-access-9z5q2\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.949605 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-oauth-serving-cert\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.949645 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-console-oauth-config\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:39 crc kubenswrapper[4860]: I0121 21:29:39.983001 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.051635 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-console-config\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.051708 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-service-ca\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.051769 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-console-serving-cert\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.051790 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-trusted-ca-bundle\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 
21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.051820 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z5q2\" (UniqueName: \"kubernetes.io/projected/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-kube-api-access-9z5q2\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.051869 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-oauth-serving-cert\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.051904 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-console-oauth-config\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.052980 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-console-config\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.054260 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-trusted-ca-bundle\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: 
I0121 21:29:40.058109 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-console-serving-cert\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.060838 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-console-oauth-config\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.061522 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-oauth-serving-cert\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.063513 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-service-ca\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.098120 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z5q2\" (UniqueName: \"kubernetes.io/projected/6a7b6912-7ee4-4c18-91fc-f1517d20ec5a-kube-api-access-9z5q2\") pod \"console-67b78c6595-xpcmw\" (UID: \"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a\") " pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.110059 4860 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.172580 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.752160 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"d48b5afa-e436-4bbb-8131-2bea3323fe51","Type":"ContainerStarted","Data":"5124866cd2eabbb2879e65f2c8576a990f3608f15aab6e7cf70a0865b89aaa94"} Jan 21 21:29:40 crc kubenswrapper[4860]: I0121 21:29:40.961170 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 21 21:29:41 crc kubenswrapper[4860]: I0121 21:29:41.042077 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67b78c6595-xpcmw"] Jan 21 21:29:41 crc kubenswrapper[4860]: I0121 21:29:41.049389 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs"] Jan 21 21:29:41 crc kubenswrapper[4860]: I0121 21:29:41.771086 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" event={"ID":"6a4226f5-36cd-49b1-bbf3-2d13973b45b5","Type":"ContainerStarted","Data":"a1a408aa8613c9a23cc44f7477765ad6a174a354d8ba9cdef12e9bcc976ed7ae"} Jan 21 21:29:41 crc kubenswrapper[4860]: I0121 21:29:41.774347 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67b78c6595-xpcmw" event={"ID":"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a","Type":"ContainerStarted","Data":"4f0efdbc516cce000b1ce214a8aa015c74c1a83fce027eef3ea09000b0560dcd"} Jan 21 21:29:41 crc kubenswrapper[4860]: I0121 21:29:41.777337 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
event={"ID":"856e4581-4208-4131-94e2-e572ed382903","Type":"ContainerStarted","Data":"ae08f84a274565aa5503d8fb7e655815d2924c5ed304762cffae48d0e0748499"} Jan 21 21:29:43 crc kubenswrapper[4860]: I0121 21:29:43.808627 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67b78c6595-xpcmw" event={"ID":"6a7b6912-7ee4-4c18-91fc-f1517d20ec5a","Type":"ContainerStarted","Data":"7cff48a8094e990e8fdd929064cf753950c7f9fc940e69db771cddd3fe72e35e"} Jan 21 21:29:43 crc kubenswrapper[4860]: I0121 21:29:43.831017 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67b78c6595-xpcmw" podStartSLOduration=4.830992642 podStartE2EDuration="4.830992642s" podCreationTimestamp="2026-01-21 21:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:29:43.828227547 +0000 UTC m=+1276.050406037" watchObservedRunningTime="2026-01-21 21:29:43.830992642 +0000 UTC m=+1276.053171112" Jan 21 21:29:50 crc kubenswrapper[4860]: I0121 21:29:50.173838 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:50 crc kubenswrapper[4860]: I0121 21:29:50.176144 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:50 crc kubenswrapper[4860]: I0121 21:29:50.182640 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:50 crc kubenswrapper[4860]: I0121 21:29:50.879247 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67b78c6595-xpcmw" Jan 21 21:29:50 crc kubenswrapper[4860]: I0121 21:29:50.945420 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b5dd98db7-zplft"] Jan 21 21:29:52 crc kubenswrapper[4860]: 
E0121 21:29:52.504165 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 21 21:29:52 crc kubenswrapper[4860]: E0121 21:29:52.505486 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldz6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_watcher-kuttl-default(d6da3cbd-8875-47bf-95ab-3734f22fe8a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:29:52 crc 
kubenswrapper[4860]: E0121 21:29:52.506799 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/rabbitmq-server-0" podUID="d6da3cbd-8875-47bf-95ab-3734f22fe8a0" Jan 21 21:29:52 crc kubenswrapper[4860]: E0121 21:29:52.913811 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="watcher-kuttl-default/rabbitmq-server-0" podUID="d6da3cbd-8875-47bf-95ab-3734f22fe8a0" Jan 21 21:29:52 crc kubenswrapper[4860]: E0121 21:29:52.992152 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 21 21:29:52 crc kubenswrapper[4860]: E0121 21:29:52.992274 4860 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 21 21:29:52 crc kubenswrapper[4860]: E0121 21:29:52.992545 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=watcher-kuttl-default],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zc5hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_watcher-kuttl-default(7e3962c5-1406-4ba2-8183-f001ebb09796): ErrImagePull: rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 21:29:52 crc kubenswrapper[4860]: E0121 21:29:52.994368 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="7e3962c5-1406-4ba2-8183-f001ebb09796" Jan 21 21:29:53 crc kubenswrapper[4860]: E0121 21:29:53.030181 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 21 21:29:53 crc kubenswrapper[4860]: E0121 21:29:53.030425 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jb2pq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_watcher-kuttl-default(f04c4d4c-f490-4a77-94fa-bea0fc5a43f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 
21:29:53 crc kubenswrapper[4860]: E0121 21:29:53.031611 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podUID="f04c4d4c-f490-4a77-94fa-bea0fc5a43f3" Jan 21 21:29:53 crc kubenswrapper[4860]: E0121 21:29:53.922328 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 21 21:29:53 crc kubenswrapper[4860]: E0121 21:29:53.923482 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n7chd9h576hddh75hffh545h65dh5bdh5c9h5dch54ch5dbh5b6h68dh558h87h96h567h664h574h554h549h675h88h57dh7ch68chcfhb7h568h68cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:n
il,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zvs42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_watcher-kuttl-default(c1817e64-9ce0-4542-a32b-da4c6dd08267): ErrImagePull: rpc error: code = Canceled desc = copying 
config: context canceled" logger="UnhandledError" Jan 21 21:29:53 crc kubenswrapper[4860]: E0121 21:29:53.923908 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podUID="f04c4d4c-f490-4a77-94fa-bea0fc5a43f3" Jan 21 21:29:53 crc kubenswrapper[4860]: E0121 21:29:53.924017 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="7e3962c5-1406-4ba2-8183-f001ebb09796" Jan 21 21:29:53 crc kubenswrapper[4860]: E0121 21:29:53.924604 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/memcached-0" podUID="c1817e64-9ce0-4542-a32b-da4c6dd08267" Jan 21 21:29:54 crc kubenswrapper[4860]: E0121 21:29:54.956473 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="watcher-kuttl-default/memcached-0" podUID="c1817e64-9ce0-4542-a32b-da4c6dd08267" Jan 21 21:29:58 crc kubenswrapper[4860]: I0121 21:29:58.052270 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"d38c2bac-c957-454f-81e3-db76b749ff2d","Type":"ContainerStarted","Data":"6f6bc6d2ae042ea3d57d2424365d66dcb4376429639ce9593ede485f95717f2f"} Jan 21 21:29:58 crc kubenswrapper[4860]: I0121 21:29:58.054667 4860 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" event={"ID":"6a4226f5-36cd-49b1-bbf3-2d13973b45b5","Type":"ContainerStarted","Data":"1c80ae68459dd00691acb3202d8204f9c6c15e452614a7338f290c9af2984e15"} Jan 21 21:29:58 crc kubenswrapper[4860]: I0121 21:29:58.099827 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-qj2fs" podStartSLOduration=7.276666172 podStartE2EDuration="20.099801138s" podCreationTimestamp="2026-01-21 21:29:38 +0000 UTC" firstStartedPulling="2026-01-21 21:29:41.080253001 +0000 UTC m=+1273.302431471" lastFinishedPulling="2026-01-21 21:29:53.903387967 +0000 UTC m=+1286.125566437" observedRunningTime="2026-01-21 21:29:58.097021203 +0000 UTC m=+1290.319199683" watchObservedRunningTime="2026-01-21 21:29:58.099801138 +0000 UTC m=+1290.321979608" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.077459 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"d48b5afa-e436-4bbb-8131-2bea3323fe51","Type":"ContainerStarted","Data":"386c6b0d476644a9a8f298916eb6aea6113d04c84f6e16c255539c031aaff5f6"} Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.171306 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk"] Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.174011 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.179244 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.179537 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.195242 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk"] Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.249475 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scpcp\" (UniqueName: \"kubernetes.io/projected/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-kube-api-access-scpcp\") pod \"collect-profiles-29483850-bv5hk\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.250056 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-config-volume\") pod \"collect-profiles-29483850-bv5hk\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.250169 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-secret-volume\") pod \"collect-profiles-29483850-bv5hk\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.354044 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-config-volume\") pod \"collect-profiles-29483850-bv5hk\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.354169 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-secret-volume\") pod \"collect-profiles-29483850-bv5hk\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.354510 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scpcp\" (UniqueName: \"kubernetes.io/projected/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-kube-api-access-scpcp\") pod \"collect-profiles-29483850-bv5hk\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.355457 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-config-volume\") pod \"collect-profiles-29483850-bv5hk\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.367515 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-secret-volume\") pod \"collect-profiles-29483850-bv5hk\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.380458 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scpcp\" (UniqueName: \"kubernetes.io/projected/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-kube-api-access-scpcp\") pod \"collect-profiles-29483850-bv5hk\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.495327 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:00 crc kubenswrapper[4860]: I0121 21:30:00.976014 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk"] Jan 21 21:30:00 crc kubenswrapper[4860]: W0121 21:30:00.992944 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e2d204d_f5ea_48b7_a0e2_6e6c6c783b5f.slice/crio-b4b355580f3d5ba3340d95cccbcb07be90caddfe158f8dab1778dad0bd9ca380 WatchSource:0}: Error finding container b4b355580f3d5ba3340d95cccbcb07be90caddfe158f8dab1778dad0bd9ca380: Status 404 returned error can't find the container with id b4b355580f3d5ba3340d95cccbcb07be90caddfe158f8dab1778dad0bd9ca380 Jan 21 21:30:01 crc kubenswrapper[4860]: I0121 21:30:01.092042 4860 generic.go:334] "Generic (PLEG): container finished" podID="d38c2bac-c957-454f-81e3-db76b749ff2d" containerID="6f6bc6d2ae042ea3d57d2424365d66dcb4376429639ce9593ede485f95717f2f" exitCode=0 Jan 21 21:30:01 crc kubenswrapper[4860]: I0121 21:30:01.092107 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"d38c2bac-c957-454f-81e3-db76b749ff2d","Type":"ContainerDied","Data":"6f6bc6d2ae042ea3d57d2424365d66dcb4376429639ce9593ede485f95717f2f"} Jan 21 21:30:01 crc kubenswrapper[4860]: I0121 21:30:01.093797 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" event={"ID":"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f","Type":"ContainerStarted","Data":"b4b355580f3d5ba3340d95cccbcb07be90caddfe158f8dab1778dad0bd9ca380"} Jan 21 21:30:01 crc kubenswrapper[4860]: I0121 21:30:01.096374 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"856e4581-4208-4131-94e2-e572ed382903","Type":"ContainerStarted","Data":"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227"} Jan 21 21:30:01 crc kubenswrapper[4860]: E0121 21:30:01.479424 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e2d204d_f5ea_48b7_a0e2_6e6c6c783b5f.slice/crio-d7f302a045eb40e1013225bd03e1fbeab7054b9e52dca4de95ed4d387bcc74bc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 21:30:02 crc kubenswrapper[4860]: I0121 21:30:02.107787 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"d38c2bac-c957-454f-81e3-db76b749ff2d","Type":"ContainerStarted","Data":"eab87c1122f8402d508b6b9445e548bb2d3444d9a9091a606bd09607ba29a3e0"} Jan 21 21:30:02 crc kubenswrapper[4860]: I0121 21:30:02.111791 4860 generic.go:334] "Generic (PLEG): container finished" podID="5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f" containerID="d7f302a045eb40e1013225bd03e1fbeab7054b9e52dca4de95ed4d387bcc74bc" exitCode=0 Jan 21 21:30:02 crc kubenswrapper[4860]: I0121 21:30:02.112707 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" event={"ID":"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f","Type":"ContainerDied","Data":"d7f302a045eb40e1013225bd03e1fbeab7054b9e52dca4de95ed4d387bcc74bc"} Jan 21 21:30:02 crc kubenswrapper[4860]: I0121 21:30:02.117739 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:30:02 crc kubenswrapper[4860]: I0121 21:30:02.117814 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:30:02 crc kubenswrapper[4860]: I0121 21:30:02.154085 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstack-galera-0" podStartSLOduration=8.151502032 podStartE2EDuration="26.154056119s" podCreationTimestamp="2026-01-21 21:29:36 +0000 UTC" firstStartedPulling="2026-01-21 21:29:38.732628189 +0000 UTC m=+1270.954806649" lastFinishedPulling="2026-01-21 21:29:56.735182266 +0000 UTC m=+1288.957360736" observedRunningTime="2026-01-21 21:30:02.142435237 +0000 UTC m=+1294.364613727" watchObservedRunningTime="2026-01-21 21:30:02.154056119 +0000 UTC m=+1294.376234599" Jan 21 21:30:03 crc kubenswrapper[4860]: I0121 21:30:03.764557 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:03 crc kubenswrapper[4860]: I0121 21:30:03.867927 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-config-volume\") pod \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " Jan 21 21:30:03 crc kubenswrapper[4860]: I0121 21:30:03.868053 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-secret-volume\") pod \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " Jan 21 21:30:03 crc kubenswrapper[4860]: I0121 21:30:03.868194 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scpcp\" (UniqueName: \"kubernetes.io/projected/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-kube-api-access-scpcp\") pod \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\" (UID: \"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f\") " Jan 21 21:30:03 crc kubenswrapper[4860]: I0121 21:30:03.868431 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-config-volume" (OuterVolumeSpecName: "config-volume") pod "5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f" (UID: "5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:30:03 crc kubenswrapper[4860]: I0121 21:30:03.868573 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 21:30:03 crc kubenswrapper[4860]: I0121 21:30:03.886612 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-kube-api-access-scpcp" (OuterVolumeSpecName: "kube-api-access-scpcp") pod "5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f" (UID: "5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f"). InnerVolumeSpecName "kube-api-access-scpcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:30:03 crc kubenswrapper[4860]: I0121 21:30:03.886655 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f" (UID: "5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:30:03 crc kubenswrapper[4860]: I0121 21:30:03.970777 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scpcp\" (UniqueName: \"kubernetes.io/projected/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-kube-api-access-scpcp\") on node \"crc\" DevicePath \"\"" Jan 21 21:30:03 crc kubenswrapper[4860]: I0121 21:30:03.970823 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 21:30:04 crc kubenswrapper[4860]: I0121 21:30:04.131109 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" event={"ID":"5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f","Type":"ContainerDied","Data":"b4b355580f3d5ba3340d95cccbcb07be90caddfe158f8dab1778dad0bd9ca380"} Jan 21 21:30:04 crc kubenswrapper[4860]: I0121 21:30:04.131167 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4b355580f3d5ba3340d95cccbcb07be90caddfe158f8dab1778dad0bd9ca380" Jan 21 21:30:04 crc kubenswrapper[4860]: I0121 21:30:04.131207 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk" Jan 21 21:30:07 crc kubenswrapper[4860]: I0121 21:30:07.175861 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"7e3962c5-1406-4ba2-8183-f001ebb09796","Type":"ContainerStarted","Data":"f693a81a1e0ac6d092dd33ba32dc026fe00fda900afa7aa5565f4d896c9c9e85"} Jan 21 21:30:07 crc kubenswrapper[4860]: I0121 21:30:07.176952 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:30:07 crc kubenswrapper[4860]: I0121 21:30:07.178357 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"c1817e64-9ce0-4542-a32b-da4c6dd08267","Type":"ContainerStarted","Data":"2f650d5d3612430dfd43d6115a2d8e7645b9260515dd6ad2a51ea8d741fd7530"} Jan 21 21:30:07 crc kubenswrapper[4860]: I0121 21:30:07.178733 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Jan 21 21:30:07 crc kubenswrapper[4860]: I0121 21:30:07.181238 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"d6da3cbd-8875-47bf-95ab-3734f22fe8a0","Type":"ContainerStarted","Data":"69063a172a0fc7e0653168056becddf16495a236fd2b0c0fc15f5b2fe54eb630"} Jan 21 21:30:07 crc kubenswrapper[4860]: I0121 21:30:07.203845 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=3.087790093 podStartE2EDuration="30.203814334s" podCreationTimestamp="2026-01-21 21:29:37 +0000 UTC" firstStartedPulling="2026-01-21 21:29:38.851600426 +0000 UTC m=+1271.073778896" lastFinishedPulling="2026-01-21 21:30:05.967624667 +0000 UTC m=+1298.189803137" observedRunningTime="2026-01-21 21:30:07.194262746 +0000 UTC m=+1299.416441216" watchObservedRunningTime="2026-01-21 21:30:07.203814334 
+0000 UTC m=+1299.425992814" Jan 21 21:30:07 crc kubenswrapper[4860]: I0121 21:30:07.269205 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.803173556 podStartE2EDuration="30.269172742s" podCreationTimestamp="2026-01-21 21:29:37 +0000 UTC" firstStartedPulling="2026-01-21 21:29:38.557686418 +0000 UTC m=+1270.779864888" lastFinishedPulling="2026-01-21 21:30:06.023685604 +0000 UTC m=+1298.245864074" observedRunningTime="2026-01-21 21:30:07.261711538 +0000 UTC m=+1299.483890008" watchObservedRunningTime="2026-01-21 21:30:07.269172742 +0000 UTC m=+1299.491351222" Jan 21 21:30:07 crc kubenswrapper[4860]: I0121 21:30:07.693809 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:30:07 crc kubenswrapper[4860]: I0121 21:30:07.693909 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:30:07 crc kubenswrapper[4860]: I0121 21:30:07.801702 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:30:08 crc kubenswrapper[4860]: I0121 21:30:08.192950 4860 generic.go:334] "Generic (PLEG): container finished" podID="d48b5afa-e436-4bbb-8131-2bea3323fe51" containerID="386c6b0d476644a9a8f298916eb6aea6113d04c84f6e16c255539c031aaff5f6" exitCode=0 Jan 21 21:30:08 crc kubenswrapper[4860]: I0121 21:30:08.193023 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"d48b5afa-e436-4bbb-8131-2bea3323fe51","Type":"ContainerDied","Data":"386c6b0d476644a9a8f298916eb6aea6113d04c84f6e16c255539c031aaff5f6"} Jan 21 21:30:08 crc kubenswrapper[4860]: I0121 21:30:08.195144 4860 generic.go:334] "Generic (PLEG): container finished" podID="856e4581-4208-4131-94e2-e572ed382903" 
containerID="5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227" exitCode=0 Jan 21 21:30:08 crc kubenswrapper[4860]: I0121 21:30:08.195208 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"856e4581-4208-4131-94e2-e572ed382903","Type":"ContainerDied","Data":"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227"} Jan 21 21:30:08 crc kubenswrapper[4860]: I0121 21:30:08.298159 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/openstack-galera-0" Jan 21 21:30:09 crc kubenswrapper[4860]: I0121 21:30:09.203600 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3","Type":"ContainerStarted","Data":"30bd5ba5c766a382a34332dc4f068bbe05065b8446ecfe22f7a29c0fecff1bfc"} Jan 21 21:30:11 crc kubenswrapper[4860]: I0121 21:30:11.224752 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"d48b5afa-e436-4bbb-8131-2bea3323fe51","Type":"ContainerStarted","Data":"c8c8901d10e4bef6b117bb4c1d3b9e41aeeed2631bed3438b114cb1e3e58440e"} Jan 21 21:30:12 crc kubenswrapper[4860]: I0121 21:30:12.500722 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Jan 21 21:30:15 crc kubenswrapper[4860]: I0121 21:30:15.972214 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/root-account-create-update-9nb9k"] Jan 21 21:30:15 crc kubenswrapper[4860]: E0121 21:30:15.973857 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f" containerName="collect-profiles" Jan 21 21:30:15 crc kubenswrapper[4860]: I0121 21:30:15.973882 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f" 
containerName="collect-profiles" Jan 21 21:30:15 crc kubenswrapper[4860]: I0121 21:30:15.974208 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f" containerName="collect-profiles" Jan 21 21:30:15 crc kubenswrapper[4860]: I0121 21:30:15.976456 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-9nb9k" Jan 21 21:30:15 crc kubenswrapper[4860]: I0121 21:30:15.979549 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 21 21:30:15 crc kubenswrapper[4860]: I0121 21:30:15.996432 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-9nb9k"] Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.071045 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b5dd98db7-zplft" podUID="7882576f-1287-498d-9ed2-e06eef1a5212" containerName="console" containerID="cri-o://26505744a70734aaa7e06e9beaae5268752e26ae9259cffa8ec5822412cff25b" gracePeriod=15 Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.165197 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abcab561-13de-4aa9-b176-f82be46c8107-operator-scripts\") pod \"root-account-create-update-9nb9k\" (UID: \"abcab561-13de-4aa9-b176-f82be46c8107\") " pod="watcher-kuttl-default/root-account-create-update-9nb9k" Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.165279 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9vk\" (UniqueName: \"kubernetes.io/projected/abcab561-13de-4aa9-b176-f82be46c8107-kube-api-access-jm9vk\") pod \"root-account-create-update-9nb9k\" (UID: \"abcab561-13de-4aa9-b176-f82be46c8107\") " 
pod="watcher-kuttl-default/root-account-create-update-9nb9k" Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.266712 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abcab561-13de-4aa9-b176-f82be46c8107-operator-scripts\") pod \"root-account-create-update-9nb9k\" (UID: \"abcab561-13de-4aa9-b176-f82be46c8107\") " pod="watcher-kuttl-default/root-account-create-update-9nb9k" Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.266792 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm9vk\" (UniqueName: \"kubernetes.io/projected/abcab561-13de-4aa9-b176-f82be46c8107-kube-api-access-jm9vk\") pod \"root-account-create-update-9nb9k\" (UID: \"abcab561-13de-4aa9-b176-f82be46c8107\") " pod="watcher-kuttl-default/root-account-create-update-9nb9k" Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.268134 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abcab561-13de-4aa9-b176-f82be46c8107-operator-scripts\") pod \"root-account-create-update-9nb9k\" (UID: \"abcab561-13de-4aa9-b176-f82be46c8107\") " pod="watcher-kuttl-default/root-account-create-update-9nb9k" Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.291437 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm9vk\" (UniqueName: \"kubernetes.io/projected/abcab561-13de-4aa9-b176-f82be46c8107-kube-api-access-jm9vk\") pod \"root-account-create-update-9nb9k\" (UID: \"abcab561-13de-4aa9-b176-f82be46c8107\") " pod="watcher-kuttl-default/root-account-create-update-9nb9k" Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.314019 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-9nb9k" Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.778835 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-9nb9k"] Jan 21 21:30:16 crc kubenswrapper[4860]: W0121 21:30:16.785992 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabcab561_13de_4aa9_b176_f82be46c8107.slice/crio-b09d63723f3675f9dfe98821ad28380127a59bbdae87675cdfe3531cf3325d55 WatchSource:0}: Error finding container b09d63723f3675f9dfe98821ad28380127a59bbdae87675cdfe3531cf3325d55: Status 404 returned error can't find the container with id b09d63723f3675f9dfe98821ad28380127a59bbdae87675cdfe3531cf3325d55 Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.917323 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-create-kl96t"] Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.918493 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-kl96t" Jan 21 21:30:16 crc kubenswrapper[4860]: I0121 21:30:16.932205 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-kl96t"] Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.079438 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj"] Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.080702 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.083319 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.084967 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050d336c-1842-498d-aa18-411b57a080eb-operator-scripts\") pod \"keystone-db-create-kl96t\" (UID: \"050d336c-1842-498d-aa18-411b57a080eb\") " pod="watcher-kuttl-default/keystone-db-create-kl96t" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.085031 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzdh5\" (UniqueName: \"kubernetes.io/projected/050d336c-1842-498d-aa18-411b57a080eb-kube-api-access-vzdh5\") pod \"keystone-db-create-kl96t\" (UID: \"050d336c-1842-498d-aa18-411b57a080eb\") " pod="watcher-kuttl-default/keystone-db-create-kl96t" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.153631 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj"] Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.187878 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfsfc\" (UniqueName: \"kubernetes.io/projected/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-kube-api-access-lfsfc\") pod \"keystone-68ee-account-create-update-6j6xj\" (UID: \"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16\") " pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.188041 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-operator-scripts\") pod \"keystone-68ee-account-create-update-6j6xj\" (UID: \"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16\") " pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.188151 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050d336c-1842-498d-aa18-411b57a080eb-operator-scripts\") pod \"keystone-db-create-kl96t\" (UID: \"050d336c-1842-498d-aa18-411b57a080eb\") " pod="watcher-kuttl-default/keystone-db-create-kl96t" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.188191 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzdh5\" (UniqueName: \"kubernetes.io/projected/050d336c-1842-498d-aa18-411b57a080eb-kube-api-access-vzdh5\") pod \"keystone-db-create-kl96t\" (UID: \"050d336c-1842-498d-aa18-411b57a080eb\") " pod="watcher-kuttl-default/keystone-db-create-kl96t" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.189592 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050d336c-1842-498d-aa18-411b57a080eb-operator-scripts\") pod \"keystone-db-create-kl96t\" (UID: \"050d336c-1842-498d-aa18-411b57a080eb\") " pod="watcher-kuttl-default/keystone-db-create-kl96t" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.212367 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzdh5\" (UniqueName: \"kubernetes.io/projected/050d336c-1842-498d-aa18-411b57a080eb-kube-api-access-vzdh5\") pod \"keystone-db-create-kl96t\" (UID: \"050d336c-1842-498d-aa18-411b57a080eb\") " pod="watcher-kuttl-default/keystone-db-create-kl96t" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.249428 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-kl96t" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.289614 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfsfc\" (UniqueName: \"kubernetes.io/projected/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-kube-api-access-lfsfc\") pod \"keystone-68ee-account-create-update-6j6xj\" (UID: \"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16\") " pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.290104 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-operator-scripts\") pod \"keystone-68ee-account-create-update-6j6xj\" (UID: \"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16\") " pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.291066 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-operator-scripts\") pod \"keystone-68ee-account-create-update-6j6xj\" (UID: \"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16\") " pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.293206 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-9nb9k" event={"ID":"abcab561-13de-4aa9-b176-f82be46c8107","Type":"ContainerStarted","Data":"b09d63723f3675f9dfe98821ad28380127a59bbdae87675cdfe3531cf3325d55"} Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.418935 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfsfc\" (UniqueName: \"kubernetes.io/projected/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-kube-api-access-lfsfc\") pod 
\"keystone-68ee-account-create-update-6j6xj\" (UID: \"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16\") " pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.710887 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj" Jan 21 21:30:17 crc kubenswrapper[4860]: I0121 21:30:17.814047 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-kl96t"] Jan 21 21:30:17 crc kubenswrapper[4860]: W0121 21:30:17.828788 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod050d336c_1842_498d_aa18_411b57a080eb.slice/crio-71f3a5d28295f09d0e08849f4b30f0fa2d981e560605069be901e2e27489a7dd WatchSource:0}: Error finding container 71f3a5d28295f09d0e08849f4b30f0fa2d981e560605069be901e2e27489a7dd: Status 404 returned error can't find the container with id 71f3a5d28295f09d0e08849f4b30f0fa2d981e560605069be901e2e27489a7dd Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.068505 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.353954 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b5dd98db7-zplft_7882576f-1287-498d-9ed2-e06eef1a5212/console/0.log" Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.354040 4860 generic.go:334] "Generic (PLEG): container finished" podID="7882576f-1287-498d-9ed2-e06eef1a5212" containerID="26505744a70734aaa7e06e9beaae5268752e26ae9259cffa8ec5822412cff25b" exitCode=2 Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.354199 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5dd98db7-zplft" 
event={"ID":"7882576f-1287-498d-9ed2-e06eef1a5212","Type":"ContainerDied","Data":"26505744a70734aaa7e06e9beaae5268752e26ae9259cffa8ec5822412cff25b"} Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.365172 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-9nb9k" event={"ID":"abcab561-13de-4aa9-b176-f82be46c8107","Type":"ContainerStarted","Data":"4e387828e31e1e691b60df2d27d0439e678d4e4a0274734ec1de57e3d2bd4ca2"} Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.367912 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-kl96t" event={"ID":"050d336c-1842-498d-aa18-411b57a080eb","Type":"ContainerStarted","Data":"71f3a5d28295f09d0e08849f4b30f0fa2d981e560605069be901e2e27489a7dd"} Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.373782 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"d48b5afa-e436-4bbb-8131-2bea3323fe51","Type":"ContainerStarted","Data":"79eebb682d38cc2f96d56c009a96c258af7ebad93fb42a99d1685474fb1d368f"} Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.374602 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.381446 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.400135 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj"] Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.404468 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/root-account-create-update-9nb9k" podStartSLOduration=3.404432209 podStartE2EDuration="3.404432209s" 
podCreationTimestamp="2026-01-21 21:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:30:18.386559132 +0000 UTC m=+1310.608737602" watchObservedRunningTime="2026-01-21 21:30:18.404432209 +0000 UTC m=+1310.626610689" Jan 21 21:30:18 crc kubenswrapper[4860]: I0121 21:30:18.417589 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/alertmanager-metric-storage-0" podStartSLOduration=10.01647502 podStartE2EDuration="40.417562518s" podCreationTimestamp="2026-01-21 21:29:38 +0000 UTC" firstStartedPulling="2026-01-21 21:29:40.130099155 +0000 UTC m=+1272.352277625" lastFinishedPulling="2026-01-21 21:30:10.531186653 +0000 UTC m=+1302.753365123" observedRunningTime="2026-01-21 21:30:18.413283415 +0000 UTC m=+1310.635461895" watchObservedRunningTime="2026-01-21 21:30:18.417562518 +0000 UTC m=+1310.639740988" Jan 21 21:30:19 crc kubenswrapper[4860]: I0121 21:30:19.388138 4860 generic.go:334] "Generic (PLEG): container finished" podID="050d336c-1842-498d-aa18-411b57a080eb" containerID="a773c7d784ca665d5962c6c70d34a7d437b030dce8875e4fb436e3826e44a9df" exitCode=0 Jan 21 21:30:19 crc kubenswrapper[4860]: I0121 21:30:19.388221 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-kl96t" event={"ID":"050d336c-1842-498d-aa18-411b57a080eb","Type":"ContainerDied","Data":"a773c7d784ca665d5962c6c70d34a7d437b030dce8875e4fb436e3826e44a9df"} Jan 21 21:30:19 crc kubenswrapper[4860]: I0121 21:30:19.390474 4860 generic.go:334] "Generic (PLEG): container finished" podID="abcab561-13de-4aa9-b176-f82be46c8107" containerID="4e387828e31e1e691b60df2d27d0439e678d4e4a0274734ec1de57e3d2bd4ca2" exitCode=0 Jan 21 21:30:19 crc kubenswrapper[4860]: I0121 21:30:19.390790 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-9nb9k" 
event={"ID":"abcab561-13de-4aa9-b176-f82be46c8107","Type":"ContainerDied","Data":"4e387828e31e1e691b60df2d27d0439e678d4e4a0274734ec1de57e3d2bd4ca2"} Jan 21 21:30:21 crc kubenswrapper[4860]: I0121 21:30:21.791110 4860 patch_prober.go:28] interesting pod/console-6b5dd98db7-zplft container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 21:30:21 crc kubenswrapper[4860]: I0121 21:30:21.791539 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6b5dd98db7-zplft" podUID="7882576f-1287-498d-9ed2-e06eef1a5212" containerName="console" probeResult="failure" output="Get \"https://10.217.0.49:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 21:30:23 crc kubenswrapper[4860]: E0121 21:30:23.248444 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Jan 21 21:30:23 crc kubenswrapper[4860]: E0121 21:30:23.248756 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus 
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kbqhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_watcher-kuttl-default(856e4581-4208-4131-94e2-e572ed382903): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.443090 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-9nb9k" event={"ID":"abcab561-13de-4aa9-b176-f82be46c8107","Type":"ContainerDied","Data":"b09d63723f3675f9dfe98821ad28380127a59bbdae87675cdfe3531cf3325d55"} Jan 21 
21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.443499 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b09d63723f3675f9dfe98821ad28380127a59bbdae87675cdfe3531cf3325d55" Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.445097 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj" event={"ID":"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16","Type":"ContainerStarted","Data":"337287ac01c0bc03d6cf49ef46069205d5f8e770b5c3829f43ae7b680fb20bb0"} Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.448841 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-kl96t" event={"ID":"050d336c-1842-498d-aa18-411b57a080eb","Type":"ContainerDied","Data":"71f3a5d28295f09d0e08849f4b30f0fa2d981e560605069be901e2e27489a7dd"} Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.448866 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71f3a5d28295f09d0e08849f4b30f0fa2d981e560605069be901e2e27489a7dd" Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.455015 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b5dd98db7-zplft_7882576f-1287-498d-9ed2-e06eef1a5212/console/0.log" Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.455029 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b5dd98db7-zplft_7882576f-1287-498d-9ed2-e06eef1a5212/console/0.log" Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.455058 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5dd98db7-zplft" event={"ID":"7882576f-1287-498d-9ed2-e06eef1a5212","Type":"ContainerDied","Data":"f4c4841152463e2a91610cf33c56961adf0957f6dabd026cc97b79bf51e5d86e"} Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.455079 4860 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f4c4841152463e2a91610cf33c56961adf0957f6dabd026cc97b79bf51e5d86e"
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.455111 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b5dd98db7-zplft"
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.490788 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-9nb9k"
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.507917 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-trusted-ca-bundle\") pod \"7882576f-1287-498d-9ed2-e06eef1a5212\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") "
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.507991 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-serving-cert\") pod \"7882576f-1287-498d-9ed2-e06eef1a5212\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") "
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.508025 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm9vk\" (UniqueName: \"kubernetes.io/projected/abcab561-13de-4aa9-b176-f82be46c8107-kube-api-access-jm9vk\") pod \"abcab561-13de-4aa9-b176-f82be46c8107\" (UID: \"abcab561-13de-4aa9-b176-f82be46c8107\") "
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.508148 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abcab561-13de-4aa9-b176-f82be46c8107-operator-scripts\") pod \"abcab561-13de-4aa9-b176-f82be46c8107\" (UID: \"abcab561-13de-4aa9-b176-f82be46c8107\") "
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.508188 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjc7b\" (UniqueName: \"kubernetes.io/projected/7882576f-1287-498d-9ed2-e06eef1a5212-kube-api-access-xjc7b\") pod \"7882576f-1287-498d-9ed2-e06eef1a5212\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") "
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.508209 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-service-ca\") pod \"7882576f-1287-498d-9ed2-e06eef1a5212\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") "
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.508256 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-oauth-serving-cert\") pod \"7882576f-1287-498d-9ed2-e06eef1a5212\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") "
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.508290 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-oauth-config\") pod \"7882576f-1287-498d-9ed2-e06eef1a5212\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") "
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.508379 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-kl96t"
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.509532 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abcab561-13de-4aa9-b176-f82be46c8107-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abcab561-13de-4aa9-b176-f82be46c8107" (UID: "abcab561-13de-4aa9-b176-f82be46c8107"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.510011 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7882576f-1287-498d-9ed2-e06eef1a5212" (UID: "7882576f-1287-498d-9ed2-e06eef1a5212"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.510070 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-service-ca" (OuterVolumeSpecName: "service-ca") pod "7882576f-1287-498d-9ed2-e06eef1a5212" (UID: "7882576f-1287-498d-9ed2-e06eef1a5212"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.510411 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7882576f-1287-498d-9ed2-e06eef1a5212" (UID: "7882576f-1287-498d-9ed2-e06eef1a5212"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.524181 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7882576f-1287-498d-9ed2-e06eef1a5212-kube-api-access-xjc7b" (OuterVolumeSpecName: "kube-api-access-xjc7b") pod "7882576f-1287-498d-9ed2-e06eef1a5212" (UID: "7882576f-1287-498d-9ed2-e06eef1a5212"). InnerVolumeSpecName "kube-api-access-xjc7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.524984 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7882576f-1287-498d-9ed2-e06eef1a5212" (UID: "7882576f-1287-498d-9ed2-e06eef1a5212"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.526166 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7882576f-1287-498d-9ed2-e06eef1a5212" (UID: "7882576f-1287-498d-9ed2-e06eef1a5212"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.538858 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abcab561-13de-4aa9-b176-f82be46c8107-kube-api-access-jm9vk" (OuterVolumeSpecName: "kube-api-access-jm9vk") pod "abcab561-13de-4aa9-b176-f82be46c8107" (UID: "abcab561-13de-4aa9-b176-f82be46c8107"). InnerVolumeSpecName "kube-api-access-jm9vk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.609762 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-console-config\") pod \"7882576f-1287-498d-9ed2-e06eef1a5212\" (UID: \"7882576f-1287-498d-9ed2-e06eef1a5212\") "
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.609851 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzdh5\" (UniqueName: \"kubernetes.io/projected/050d336c-1842-498d-aa18-411b57a080eb-kube-api-access-vzdh5\") pod \"050d336c-1842-498d-aa18-411b57a080eb\" (UID: \"050d336c-1842-498d-aa18-411b57a080eb\") "
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.610232 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.610253 4860 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.610265 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm9vk\" (UniqueName: \"kubernetes.io/projected/abcab561-13de-4aa9-b176-f82be46c8107-kube-api-access-jm9vk\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.610279 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abcab561-13de-4aa9-b176-f82be46c8107-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.610288 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjc7b\" (UniqueName: \"kubernetes.io/projected/7882576f-1287-498d-9ed2-e06eef1a5212-kube-api-access-xjc7b\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.610298 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.610307 4860 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.610316 4860 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7882576f-1287-498d-9ed2-e06eef1a5212-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.611449 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-console-config" (OuterVolumeSpecName: "console-config") pod "7882576f-1287-498d-9ed2-e06eef1a5212" (UID: "7882576f-1287-498d-9ed2-e06eef1a5212"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.616693 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050d336c-1842-498d-aa18-411b57a080eb-kube-api-access-vzdh5" (OuterVolumeSpecName: "kube-api-access-vzdh5") pod "050d336c-1842-498d-aa18-411b57a080eb" (UID: "050d336c-1842-498d-aa18-411b57a080eb"). InnerVolumeSpecName "kube-api-access-vzdh5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.712835 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050d336c-1842-498d-aa18-411b57a080eb-operator-scripts\") pod \"050d336c-1842-498d-aa18-411b57a080eb\" (UID: \"050d336c-1842-498d-aa18-411b57a080eb\") "
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.713255 4860 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7882576f-1287-498d-9ed2-e06eef1a5212-console-config\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.713270 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzdh5\" (UniqueName: \"kubernetes.io/projected/050d336c-1842-498d-aa18-411b57a080eb-kube-api-access-vzdh5\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.713995 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050d336c-1842-498d-aa18-411b57a080eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "050d336c-1842-498d-aa18-411b57a080eb" (UID: "050d336c-1842-498d-aa18-411b57a080eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:30:23 crc kubenswrapper[4860]: I0121 21:30:23.814702 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050d336c-1842-498d-aa18-411b57a080eb-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:24 crc kubenswrapper[4860]: I0121 21:30:24.492820 4860 generic.go:334] "Generic (PLEG): container finished" podID="b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16" containerID="a1ed4408268c4cab543ece235e91c256c7e59e9440b42c7297fd789deb71d16e" exitCode=0
Jan 21 21:30:24 crc kubenswrapper[4860]: I0121 21:30:24.493172 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-9nb9k"
Jan 21 21:30:24 crc kubenswrapper[4860]: I0121 21:30:24.493191 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-kl96t"
Jan 21 21:30:24 crc kubenswrapper[4860]: I0121 21:30:24.492875 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj" event={"ID":"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16","Type":"ContainerDied","Data":"a1ed4408268c4cab543ece235e91c256c7e59e9440b42c7297fd789deb71d16e"}
Jan 21 21:30:24 crc kubenswrapper[4860]: I0121 21:30:24.493301 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b5dd98db7-zplft"
Jan 21 21:30:24 crc kubenswrapper[4860]: I0121 21:30:24.547188 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b5dd98db7-zplft"]
Jan 21 21:30:24 crc kubenswrapper[4860]: I0121 21:30:24.607718 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b5dd98db7-zplft"]
Jan 21 21:30:25 crc kubenswrapper[4860]: I0121 21:30:25.921633 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj"
Jan 21 21:30:26 crc kubenswrapper[4860]: I0121 21:30:26.105663 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfsfc\" (UniqueName: \"kubernetes.io/projected/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-kube-api-access-lfsfc\") pod \"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16\" (UID: \"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16\") "
Jan 21 21:30:26 crc kubenswrapper[4860]: I0121 21:30:26.107207 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-operator-scripts\") pod \"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16\" (UID: \"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16\") "
Jan 21 21:30:26 crc kubenswrapper[4860]: I0121 21:30:26.108526 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16" (UID: "b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:30:26 crc kubenswrapper[4860]: I0121 21:30:26.112991 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-kube-api-access-lfsfc" (OuterVolumeSpecName: "kube-api-access-lfsfc") pod "b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16" (UID: "b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16"). InnerVolumeSpecName "kube-api-access-lfsfc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:30:26 crc kubenswrapper[4860]: I0121 21:30:26.210150 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfsfc\" (UniqueName: \"kubernetes.io/projected/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-kube-api-access-lfsfc\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:26 crc kubenswrapper[4860]: I0121 21:30:26.210209 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:30:26 crc kubenswrapper[4860]: I0121 21:30:26.517405 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj" event={"ID":"b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16","Type":"ContainerDied","Data":"337287ac01c0bc03d6cf49ef46069205d5f8e770b5c3829f43ae7b680fb20bb0"}
Jan 21 21:30:26 crc kubenswrapper[4860]: I0121 21:30:26.517846 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="337287ac01c0bc03d6cf49ef46069205d5f8e770b5c3829f43ae7b680fb20bb0"
Jan 21 21:30:26 crc kubenswrapper[4860]: I0121 21:30:26.517534 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj"
Jan 21 21:30:26 crc kubenswrapper[4860]: I0121 21:30:26.588439 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7882576f-1287-498d-9ed2-e06eef1a5212" path="/var/lib/kubelet/pods/7882576f-1287-498d-9ed2-e06eef1a5212/volumes"
Jan 21 21:30:27 crc kubenswrapper[4860]: I0121 21:30:27.528451 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"856e4581-4208-4131-94e2-e572ed382903","Type":"ContainerStarted","Data":"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9"}
Jan 21 21:30:30 crc kubenswrapper[4860]: E0121 21:30:30.576736 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="856e4581-4208-4131-94e2-e572ed382903"
Jan 21 21:30:31 crc kubenswrapper[4860]: I0121 21:30:31.584432 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"856e4581-4208-4131-94e2-e572ed382903","Type":"ContainerStarted","Data":"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd"}
Jan 21 21:30:31 crc kubenswrapper[4860]: E0121 21:30:31.587090 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741\\\"\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="856e4581-4208-4131-94e2-e572ed382903"
Jan 21 21:30:32 crc kubenswrapper[4860]: I0121 21:30:32.211972 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:30:32 crc kubenswrapper[4860]: I0121 21:30:32.212471 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:30:32 crc kubenswrapper[4860]: E0121 21:30:32.620763 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741\\\"\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="856e4581-4208-4131-94e2-e572ed382903"
Jan 21 21:30:34 crc kubenswrapper[4860]: I0121 21:30:34.936792 4860 scope.go:117] "RemoveContainer" containerID="26505744a70734aaa7e06e9beaae5268752e26ae9259cffa8ec5822412cff25b"
Jan 21 21:30:39 crc kubenswrapper[4860]: I0121 21:30:39.674597 4860 generic.go:334] "Generic (PLEG): container finished" podID="d6da3cbd-8875-47bf-95ab-3734f22fe8a0" containerID="69063a172a0fc7e0653168056becddf16495a236fd2b0c0fc15f5b2fe54eb630" exitCode=0
Jan 21 21:30:39 crc kubenswrapper[4860]: I0121 21:30:39.674721 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"d6da3cbd-8875-47bf-95ab-3734f22fe8a0","Type":"ContainerDied","Data":"69063a172a0fc7e0653168056becddf16495a236fd2b0c0fc15f5b2fe54eb630"}
Jan 21 21:30:40 crc kubenswrapper[4860]: I0121 21:30:40.686094 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"d6da3cbd-8875-47bf-95ab-3734f22fe8a0","Type":"ContainerStarted","Data":"75c073c20416d4b95946d4794141300d17374a2f37a1b7adb8effbe65f240de0"}
Jan 21 21:30:40 crc kubenswrapper[4860]: I0121 21:30:40.687080 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 21 21:30:40 crc kubenswrapper[4860]: I0121 21:30:40.720631 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-server-0" podStartSLOduration=38.087992285 podStartE2EDuration="1m7.720588028s" podCreationTimestamp="2026-01-21 21:29:33 +0000 UTC" firstStartedPulling="2026-01-21 21:29:35.516839676 +0000 UTC m=+1267.739018146" lastFinishedPulling="2026-01-21 21:30:05.149435419 +0000 UTC m=+1297.371613889" observedRunningTime="2026-01-21 21:30:40.713410554 +0000 UTC m=+1332.935589044" watchObservedRunningTime="2026-01-21 21:30:40.720588028 +0000 UTC m=+1332.942766498"
Jan 21 21:30:41 crc kubenswrapper[4860]: I0121 21:30:41.699479 4860 generic.go:334] "Generic (PLEG): container finished" podID="f04c4d4c-f490-4a77-94fa-bea0fc5a43f3" containerID="30bd5ba5c766a382a34332dc4f068bbe05065b8446ecfe22f7a29c0fecff1bfc" exitCode=0
Jan 21 21:30:41 crc kubenswrapper[4860]: I0121 21:30:41.699554 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3","Type":"ContainerDied","Data":"30bd5ba5c766a382a34332dc4f068bbe05065b8446ecfe22f7a29c0fecff1bfc"}
Jan 21 21:30:42 crc kubenswrapper[4860]: I0121 21:30:42.715350 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"f04c4d4c-f490-4a77-94fa-bea0fc5a43f3","Type":"ContainerStarted","Data":"220fb1832b5b6034d82b25973ebbf702e09b7990f4916a23f95c93caf291718c"}
Jan 21 21:30:42 crc kubenswrapper[4860]: I0121 21:30:42.716345 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 21 21:30:54 crc kubenswrapper[4860]: I0121 21:30:54.835328 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"856e4581-4208-4131-94e2-e572ed382903","Type":"ContainerStarted","Data":"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2"}
Jan 21 21:30:54 crc kubenswrapper[4860]: I0121 21:30:54.877077 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=-9223371955.977726 podStartE2EDuration="1m20.877050372s" podCreationTimestamp="2026-01-21 21:29:34 +0000 UTC" firstStartedPulling="2026-01-21 21:29:36.188071153 +0000 UTC m=+1268.410249623" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:30:42.75081165 +0000 UTC m=+1334.972990120" watchObservedRunningTime="2026-01-21 21:30:54.877050372 +0000 UTC m=+1347.099228862"
Jan 21 21:30:54 crc kubenswrapper[4860]: I0121 21:30:54.877268 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=4.181422627 podStartE2EDuration="1m16.877259798s" podCreationTimestamp="2026-01-21 21:29:38 +0000 UTC" firstStartedPulling="2026-01-21 21:29:41.084340726 +0000 UTC m=+1273.306519196" lastFinishedPulling="2026-01-21 21:30:53.780177897 +0000 UTC m=+1346.002356367" observedRunningTime="2026-01-21 21:30:54.870508058 +0000 UTC m=+1347.092686548" watchObservedRunningTime="2026-01-21 21:30:54.877259798 +0000 UTC m=+1347.099438288"
Jan 21 21:30:54 crc kubenswrapper[4860]: I0121 21:30:54.953221 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.611515 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-sync-8wdfp"]
Jan 21 21:30:55 crc kubenswrapper[4860]: E0121 21:30:55.612088 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcab561-13de-4aa9-b176-f82be46c8107" containerName="mariadb-account-create-update"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.612113 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcab561-13de-4aa9-b176-f82be46c8107" containerName="mariadb-account-create-update"
Jan 21 21:30:55 crc kubenswrapper[4860]: E0121 21:30:55.612149 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16" containerName="mariadb-account-create-update"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.612156 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16" containerName="mariadb-account-create-update"
Jan 21 21:30:55 crc kubenswrapper[4860]: E0121 21:30:55.612171 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7882576f-1287-498d-9ed2-e06eef1a5212" containerName="console"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.612179 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7882576f-1287-498d-9ed2-e06eef1a5212" containerName="console"
Jan 21 21:30:55 crc kubenswrapper[4860]: E0121 21:30:55.612203 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050d336c-1842-498d-aa18-411b57a080eb" containerName="mariadb-database-create"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.612208 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="050d336c-1842-498d-aa18-411b57a080eb" containerName="mariadb-database-create"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.612443 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7882576f-1287-498d-9ed2-e06eef1a5212" containerName="console"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.612464 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16" containerName="mariadb-account-create-update"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.612474 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="abcab561-13de-4aa9-b176-f82be46c8107" containerName="mariadb-account-create-update"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.612484 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="050d336c-1842-498d-aa18-411b57a080eb" containerName="mariadb-database-create"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.613194 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.613532 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-8wdfp"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.616650 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.617185 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.617373 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.627253 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-d22jf"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.642085 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-8wdfp"]
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.725395 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr5ng\" (UniqueName: \"kubernetes.io/projected/695edaa1-d556-4a7c-bb54-fa518455069a-kube-api-access-rr5ng\") pod \"keystone-db-sync-8wdfp\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " pod="watcher-kuttl-default/keystone-db-sync-8wdfp"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.725865 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-config-data\") pod \"keystone-db-sync-8wdfp\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " pod="watcher-kuttl-default/keystone-db-sync-8wdfp"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.725902 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-combined-ca-bundle\") pod \"keystone-db-sync-8wdfp\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " pod="watcher-kuttl-default/keystone-db-sync-8wdfp"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.827888 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-config-data\") pod \"keystone-db-sync-8wdfp\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " pod="watcher-kuttl-default/keystone-db-sync-8wdfp"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.827980 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-combined-ca-bundle\") pod \"keystone-db-sync-8wdfp\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " pod="watcher-kuttl-default/keystone-db-sync-8wdfp"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.828106 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr5ng\" (UniqueName: \"kubernetes.io/projected/695edaa1-d556-4a7c-bb54-fa518455069a-kube-api-access-rr5ng\") pod \"keystone-db-sync-8wdfp\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " pod="watcher-kuttl-default/keystone-db-sync-8wdfp"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.834963 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-combined-ca-bundle\") pod \"keystone-db-sync-8wdfp\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " pod="watcher-kuttl-default/keystone-db-sync-8wdfp"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.838005 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-config-data\") pod \"keystone-db-sync-8wdfp\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " pod="watcher-kuttl-default/keystone-db-sync-8wdfp"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.846368 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr5ng\" (UniqueName: \"kubernetes.io/projected/695edaa1-d556-4a7c-bb54-fa518455069a-kube-api-access-rr5ng\") pod \"keystone-db-sync-8wdfp\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " pod="watcher-kuttl-default/keystone-db-sync-8wdfp"
Jan 21 21:30:55 crc kubenswrapper[4860]: I0121 21:30:55.940738 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-8wdfp"
Jan 21 21:30:56 crc kubenswrapper[4860]: I0121 21:30:56.465512 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-8wdfp"]
Jan 21 21:30:56 crc kubenswrapper[4860]: I0121 21:30:56.853351 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-8wdfp" event={"ID":"695edaa1-d556-4a7c-bb54-fa518455069a","Type":"ContainerStarted","Data":"476daf42994dc673a9b80e118006d3e4e2023c27b9649d9e1e7f32062af83918"}
Jan 21 21:30:59 crc kubenswrapper[4860]: I0121 21:30:59.792746 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 21 21:31:02 crc kubenswrapper[4860]: I0121 21:31:02.103767 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:31:02 crc kubenswrapper[4860]: I0121 21:31:02.103857 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:31:02 crc kubenswrapper[4860]: I0121 21:31:02.103924 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx"
Jan 21 21:31:02 crc kubenswrapper[4860]: I0121 21:31:02.104930 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a1026d7df8e6decaf8dcd0187c59fd31bbfa3791da6287908484db6b5520da6"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 21:31:02 crc kubenswrapper[4860]: I0121 21:31:02.105068 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://6a1026d7df8e6decaf8dcd0187c59fd31bbfa3791da6287908484db6b5520da6" gracePeriod=600
Jan 21 21:31:02 crc kubenswrapper[4860]: I0121 21:31:02.910076 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="6a1026d7df8e6decaf8dcd0187c59fd31bbfa3791da6287908484db6b5520da6" exitCode=0
Jan 21 21:31:02 crc kubenswrapper[4860]: I0121 21:31:02.910151 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"6a1026d7df8e6decaf8dcd0187c59fd31bbfa3791da6287908484db6b5520da6"}
Jan 21 21:31:02 crc kubenswrapper[4860]: I0121 21:31:02.910478 4860 scope.go:117] "RemoveContainer" containerID="6f0b3fc12fa9ba32ff6e2eb0239bbfea7864555f13d17d499448eef7cdde4887"
Jan 21 21:31:05 crc kubenswrapper[4860]: I0121 21:31:05.946263 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97"}
Jan 21 21:31:05 crc kubenswrapper[4860]: I0121 21:31:05.948314 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-8wdfp" event={"ID":"695edaa1-d556-4a7c-bb54-fa518455069a","Type":"ContainerStarted","Data":"4d2faa002ef1a13f9a70c36d0fe905f19370c4899d739453f973a734a1998317"}
Jan 21 21:31:06 crc kubenswrapper[4860]: I0121 21:31:06.007761 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-db-sync-8wdfp" podStartSLOduration=2.690154675 podStartE2EDuration="11.007728543s" podCreationTimestamp="2026-01-21 21:30:55 +0000 UTC" firstStartedPulling="2026-01-21 21:30:56.480112777 +0000 UTC m=+1348.702291237" lastFinishedPulling="2026-01-21 21:31:04.797686635 +0000 UTC m=+1357.019865105" observedRunningTime="2026-01-21 21:31:06.004026098 +0000 UTC m=+1358.226204568" watchObservedRunningTime="2026-01-21 21:31:06.007728543 +0000 UTC m=+1358.229907013"
Jan 21 21:31:09 crc kubenswrapper[4860]: I0121 21:31:09.793484 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 21 21:31:09 crc kubenswrapper[4860]: I0121 21:31:09.796951 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 21 21:31:10 crc kubenswrapper[4860]: I0121 21:31:10.125280 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 21 21:31:13 crc kubenswrapper[4860]: I0121 21:31:13.092698 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Jan 21 21:31:13 crc kubenswrapper[4860]: I0121 21:31:13.093773 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="config-reloader" containerID="cri-o://36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9" gracePeriod=600
Jan 21 21:31:13 crc kubenswrapper[4860]: I0121 21:31:13.093874 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="thanos-sidecar" containerID="cri-o://0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd" gracePeriod=600
Jan 21 21:31:13 crc kubenswrapper[4860]: I0121 21:31:13.093833 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="prometheus" containerID="cri-o://be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2" gracePeriod=600
Jan 21 21:31:13 crc kubenswrapper[4860]: I0121 21:31:13.154703 4860 generic.go:334] "Generic (PLEG): container finished" podID="695edaa1-d556-4a7c-bb54-fa518455069a" containerID="4d2faa002ef1a13f9a70c36d0fe905f19370c4899d739453f973a734a1998317" exitCode=0
Jan 21 21:31:13 crc kubenswrapper[4860]: I0121 21:31:13.154778 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-8wdfp" event={"ID":"695edaa1-d556-4a7c-bb54-fa518455069a","Type":"ContainerDied","Data":"4d2faa002ef1a13f9a70c36d0fe905f19370c4899d739453f973a734a1998317"}
Jan 21 21:31:13 crc kubenswrapper[4860]: E0121 21:31:13.351475 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod856e4581_4208_4131_94e2_e572ed382903.slice/crio-be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod856e4581_4208_4131_94e2_e572ed382903.slice/crio-conmon-be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod856e4581_4208_4131_94e2_e572ed382903.slice/crio-conmon-0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 21:31:14
crc kubenswrapper[4860]: I0121 21:31:14.078245 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.170634 4860 generic.go:334] "Generic (PLEG): container finished" podID="856e4581-4208-4131-94e2-e572ed382903" containerID="be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2" exitCode=0 Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.170680 4860 generic.go:334] "Generic (PLEG): container finished" podID="856e4581-4208-4131-94e2-e572ed382903" containerID="0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd" exitCode=0 Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.170694 4860 generic.go:334] "Generic (PLEG): container finished" podID="856e4581-4208-4131-94e2-e572ed382903" containerID="36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9" exitCode=0 Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.171111 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"856e4581-4208-4131-94e2-e572ed382903","Type":"ContainerDied","Data":"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2"} Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.171235 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"856e4581-4208-4131-94e2-e572ed382903","Type":"ContainerDied","Data":"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd"} Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.171254 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"856e4581-4208-4131-94e2-e572ed382903","Type":"ContainerDied","Data":"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9"} Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.171275 4860 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"856e4581-4208-4131-94e2-e572ed382903","Type":"ContainerDied","Data":"ae08f84a274565aa5503d8fb7e655815d2924c5ed304762cffae48d0e0748499"} Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.171372 4860 scope.go:117] "RemoveContainer" containerID="be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.171725 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.217446 4860 scope.go:117] "RemoveContainer" containerID="0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.225965 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbqhz\" (UniqueName: \"kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-kube-api-access-kbqhz\") pod \"856e4581-4208-4131-94e2-e572ed382903\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.226353 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-2\") pod \"856e4581-4208-4131-94e2-e572ed382903\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.226466 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-config\") pod \"856e4581-4208-4131-94e2-e572ed382903\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.227109 4860 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "856e4581-4208-4131-94e2-e572ed382903" (UID: "856e4581-4208-4131-94e2-e572ed382903"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.227153 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/856e4581-4208-4131-94e2-e572ed382903-config-out\") pod \"856e4581-4208-4131-94e2-e572ed382903\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.227568 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\") pod \"856e4581-4208-4131-94e2-e572ed382903\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.227724 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-thanos-prometheus-http-client-file\") pod \"856e4581-4208-4131-94e2-e572ed382903\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.228015 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-1\") pod \"856e4581-4208-4131-94e2-e572ed382903\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " Jan 21 21:31:14 crc 
kubenswrapper[4860]: I0121 21:31:14.228155 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-tls-assets\") pod \"856e4581-4208-4131-94e2-e572ed382903\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.228767 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-0\") pod \"856e4581-4208-4131-94e2-e572ed382903\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.228968 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-web-config\") pod \"856e4581-4208-4131-94e2-e572ed382903\" (UID: \"856e4581-4208-4131-94e2-e572ed382903\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.229446 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "856e4581-4208-4131-94e2-e572ed382903" (UID: "856e4581-4208-4131-94e2-e572ed382903"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.229801 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "856e4581-4208-4131-94e2-e572ed382903" (UID: "856e4581-4208-4131-94e2-e572ed382903"). 
InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.230328 4860 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.230417 4860 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.230498 4860 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/856e4581-4208-4131-94e2-e572ed382903-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.240851 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/856e4581-4208-4131-94e2-e572ed382903-config-out" (OuterVolumeSpecName: "config-out") pod "856e4581-4208-4131-94e2-e572ed382903" (UID: "856e4581-4208-4131-94e2-e572ed382903"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.241068 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-kube-api-access-kbqhz" (OuterVolumeSpecName: "kube-api-access-kbqhz") pod "856e4581-4208-4131-94e2-e572ed382903" (UID: "856e4581-4208-4131-94e2-e572ed382903"). InnerVolumeSpecName "kube-api-access-kbqhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.240872 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-config" (OuterVolumeSpecName: "config") pod "856e4581-4208-4131-94e2-e572ed382903" (UID: "856e4581-4208-4131-94e2-e572ed382903"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.246283 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "856e4581-4208-4131-94e2-e572ed382903" (UID: "856e4581-4208-4131-94e2-e572ed382903"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.252328 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "856e4581-4208-4131-94e2-e572ed382903" (UID: "856e4581-4208-4131-94e2-e572ed382903"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.269151 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "856e4581-4208-4131-94e2-e572ed382903" (UID: "856e4581-4208-4131-94e2-e572ed382903"). InnerVolumeSpecName "pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.289363 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-web-config" (OuterVolumeSpecName: "web-config") pod "856e4581-4208-4131-94e2-e572ed382903" (UID: "856e4581-4208-4131-94e2-e572ed382903"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.324908 4860 scope.go:117] "RemoveContainer" containerID="36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.338306 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.338372 4860 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/856e4581-4208-4131-94e2-e572ed382903-config-out\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.338502 4860 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\") on node \"crc\" " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.338524 4860 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.338559 4860 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.338571 4860 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/856e4581-4208-4131-94e2-e572ed382903-web-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.338583 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbqhz\" (UniqueName: \"kubernetes.io/projected/856e4581-4208-4131-94e2-e572ed382903-kube-api-access-kbqhz\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.387707 4860 scope.go:117] "RemoveContainer" containerID="5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.388197 4860 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.388795 4860 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd") on node "crc" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.428014 4860 scope.go:117] "RemoveContainer" containerID="be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2" Jan 21 21:31:14 crc kubenswrapper[4860]: E0121 21:31:14.431051 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2\": container with ID starting with be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2 not found: ID does not exist" containerID="be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.431148 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2"} err="failed to get container status \"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2\": rpc error: code = NotFound desc = could not find container \"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2\": container with ID starting with be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2 not found: ID does not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.431416 4860 scope.go:117] "RemoveContainer" containerID="0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd" Jan 21 21:31:14 crc kubenswrapper[4860]: E0121 21:31:14.434006 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd\": container with ID starting 
with 0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd not found: ID does not exist" containerID="0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.434066 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd"} err="failed to get container status \"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd\": rpc error: code = NotFound desc = could not find container \"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd\": container with ID starting with 0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd not found: ID does not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.434112 4860 scope.go:117] "RemoveContainer" containerID="36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9" Jan 21 21:31:14 crc kubenswrapper[4860]: E0121 21:31:14.434678 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9\": container with ID starting with 36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9 not found: ID does not exist" containerID="36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.434716 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9"} err="failed to get container status \"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9\": rpc error: code = NotFound desc = could not find container \"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9\": container with ID starting with 36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9 not found: ID does 
not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.434760 4860 scope.go:117] "RemoveContainer" containerID="5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227" Jan 21 21:31:14 crc kubenswrapper[4860]: E0121 21:31:14.435176 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227\": container with ID starting with 5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227 not found: ID does not exist" containerID="5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.435243 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227"} err="failed to get container status \"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227\": rpc error: code = NotFound desc = could not find container \"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227\": container with ID starting with 5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227 not found: ID does not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.435258 4860 scope.go:117] "RemoveContainer" containerID="be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.435560 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2"} err="failed to get container status \"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2\": rpc error: code = NotFound desc = could not find container \"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2\": container with ID starting with be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2 not 
found: ID does not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.435589 4860 scope.go:117] "RemoveContainer" containerID="0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.435827 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd"} err="failed to get container status \"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd\": rpc error: code = NotFound desc = could not find container \"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd\": container with ID starting with 0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd not found: ID does not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.435852 4860 scope.go:117] "RemoveContainer" containerID="36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.436594 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9"} err="failed to get container status \"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9\": rpc error: code = NotFound desc = could not find container \"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9\": container with ID starting with 36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9 not found: ID does not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.436620 4860 scope.go:117] "RemoveContainer" containerID="5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.436850 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227"} err="failed to get 
container status \"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227\": rpc error: code = NotFound desc = could not find container \"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227\": container with ID starting with 5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227 not found: ID does not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.436872 4860 scope.go:117] "RemoveContainer" containerID="be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.437308 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2"} err="failed to get container status \"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2\": rpc error: code = NotFound desc = could not find container \"be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2\": container with ID starting with be3af02c423c748c7263f3d00a2cdc3025dbbe20f0b92a533e46697aaa46f3a2 not found: ID does not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.437344 4860 scope.go:117] "RemoveContainer" containerID="0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.437812 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd"} err="failed to get container status \"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd\": rpc error: code = NotFound desc = could not find container \"0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd\": container with ID starting with 0cdd7427e59cf5b32c89de0a128d62f5cd3acf9a25a6394753b512b0d71594cd not found: ID does not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.437840 4860 scope.go:117] "RemoveContainer" 
containerID="36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.438260 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9"} err="failed to get container status \"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9\": rpc error: code = NotFound desc = could not find container \"36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9\": container with ID starting with 36bf5c849f7b7278f3c2610d50e8f4ba9b45f9ed4c7e31337deda2ebe04cfac9 not found: ID does not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.438291 4860 scope.go:117] "RemoveContainer" containerID="5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.438830 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227"} err="failed to get container status \"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227\": rpc error: code = NotFound desc = could not find container \"5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227\": container with ID starting with 5a3a4c3e2d590c2d4b77f4587546cc8ed0f1ca2f64e5fcfe1ee0a4c946684227 not found: ID does not exist" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.440194 4860 reconciler_common.go:293] "Volume detached for volume \"pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.516312 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-8wdfp" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.650212 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr5ng\" (UniqueName: \"kubernetes.io/projected/695edaa1-d556-4a7c-bb54-fa518455069a-kube-api-access-rr5ng\") pod \"695edaa1-d556-4a7c-bb54-fa518455069a\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.650518 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-combined-ca-bundle\") pod \"695edaa1-d556-4a7c-bb54-fa518455069a\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.650575 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-config-data\") pod \"695edaa1-d556-4a7c-bb54-fa518455069a\" (UID: \"695edaa1-d556-4a7c-bb54-fa518455069a\") " Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.662607 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695edaa1-d556-4a7c-bb54-fa518455069a-kube-api-access-rr5ng" (OuterVolumeSpecName: "kube-api-access-rr5ng") pod "695edaa1-d556-4a7c-bb54-fa518455069a" (UID: "695edaa1-d556-4a7c-bb54-fa518455069a"). InnerVolumeSpecName "kube-api-access-rr5ng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.679827 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.680657 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.706275 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "695edaa1-d556-4a7c-bb54-fa518455069a" (UID: "695edaa1-d556-4a7c-bb54-fa518455069a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.719341 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-config-data" (OuterVolumeSpecName: "config-data") pod "695edaa1-d556-4a7c-bb54-fa518455069a" (UID: "695edaa1-d556-4a7c-bb54-fa518455069a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.732162 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 21 21:31:14 crc kubenswrapper[4860]: E0121 21:31:14.732863 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695edaa1-d556-4a7c-bb54-fa518455069a" containerName="keystone-db-sync" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.732897 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="695edaa1-d556-4a7c-bb54-fa518455069a" containerName="keystone-db-sync" Jan 21 21:31:14 crc kubenswrapper[4860]: E0121 21:31:14.732949 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="config-reloader" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.732957 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="config-reloader" Jan 21 21:31:14 crc kubenswrapper[4860]: E0121 21:31:14.732987 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="thanos-sidecar" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.732994 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="thanos-sidecar" Jan 21 21:31:14 crc kubenswrapper[4860]: E0121 21:31:14.733019 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="prometheus" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.733027 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="prometheus" Jan 21 21:31:14 crc kubenswrapper[4860]: E0121 21:31:14.733041 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856e4581-4208-4131-94e2-e572ed382903" 
containerName="init-config-reloader" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.733048 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="init-config-reloader" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.733334 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="thanos-sidecar" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.733374 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="prometheus" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.733386 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="856e4581-4208-4131-94e2-e572ed382903" containerName="config-reloader" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.733397 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="695edaa1-d556-4a7c-bb54-fa518455069a" containerName="keystone-db-sync" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.735433 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.742013 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.743223 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.744143 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-9ssx9" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.749348 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.749624 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.749966 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.750398 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.750458 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-metric-storage-prometheus-svc" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.753793 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr5ng\" (UniqueName: \"kubernetes.io/projected/695edaa1-d556-4a7c-bb54-fa518455069a-kube-api-access-rr5ng\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.753823 
4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.753836 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695edaa1-d556-4a7c-bb54-fa518455069a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.758865 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.771686 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.855372 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.855476 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2b229e16-dd0c-4c98-b734-dbe3c20639aa-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.855560 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/2b229e16-dd0c-4c98-b734-dbe3c20639aa-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.855922 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.856084 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhgsq\" (UniqueName: \"kubernetes.io/projected/2b229e16-dd0c-4c98-b734-dbe3c20639aa-kube-api-access-fhgsq\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.856292 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.856842 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.856908 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2b229e16-dd0c-4c98-b734-dbe3c20639aa-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.856980 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.857267 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-config\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.857417 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2b229e16-dd0c-4c98-b734-dbe3c20639aa-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.857481 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/2b229e16-dd0c-4c98-b734-dbe3c20639aa-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.857519 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959321 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959417 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2b229e16-dd0c-4c98-b734-dbe3c20639aa-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959461 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959486 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959509 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-config\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959541 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2b229e16-dd0c-4c98-b734-dbe3c20639aa-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959561 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2b229e16-dd0c-4c98-b734-dbe3c20639aa-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959583 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959605 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959630 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2b229e16-dd0c-4c98-b734-dbe3c20639aa-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959663 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2b229e16-dd0c-4c98-b734-dbe3c20639aa-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959710 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.959742 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhgsq\" (UniqueName: \"kubernetes.io/projected/2b229e16-dd0c-4c98-b734-dbe3c20639aa-kube-api-access-fhgsq\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.961333 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2b229e16-dd0c-4c98-b734-dbe3c20639aa-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.961500 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2b229e16-dd0c-4c98-b734-dbe3c20639aa-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.961748 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2b229e16-dd0c-4c98-b734-dbe3c20639aa-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.964781 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-config\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.966024 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.966956 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.966982 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.967149 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc 
kubenswrapper[4860]: I0121 21:31:14.967451 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2b229e16-dd0c-4c98-b734-dbe3c20639aa-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.969007 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2b229e16-dd0c-4c98-b734-dbe3c20639aa-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.969190 4860 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.969221 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7fd7fc261f9c8bc632f5a76ba4441601341d00dd6bfb49c87553592a23d2ac9f/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 crc kubenswrapper[4860]: I0121 21:31:14.969678 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2b229e16-dd0c-4c98-b734-dbe3c20639aa-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:14 
crc kubenswrapper[4860]: I0121 21:31:14.980243 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhgsq\" (UniqueName: \"kubernetes.io/projected/2b229e16-dd0c-4c98-b734-dbe3c20639aa-kube-api-access-fhgsq\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.001440 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a58fc1d7-70b3-43e2-bc05-cba32d5dabfd\") pod \"prometheus-metric-storage-0\" (UID: \"2b229e16-dd0c-4c98-b734-dbe3c20639aa\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.068070 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.222267 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-8wdfp" event={"ID":"695edaa1-d556-4a7c-bb54-fa518455069a","Type":"ContainerDied","Data":"476daf42994dc673a9b80e118006d3e4e2023c27b9649d9e1e7f32062af83918"} Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.222610 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476daf42994dc673a9b80e118006d3e4e2023c27b9649d9e1e7f32062af83918" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.222691 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-8wdfp" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.430242 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.447680 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-2pgtk"] Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.463165 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-2pgtk" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.466831 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.468303 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-d22jf" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.471584 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.471781 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.472186 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-2pgtk"] Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.473006 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.575377 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-config-data\") pod \"keystone-bootstrap-2pgtk\" (UID: 
\"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.575456 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-fernet-keys\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.575703 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-scripts\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.575829 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j4fh\" (UniqueName: \"kubernetes.io/projected/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-kube-api-access-6j4fh\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.575979 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-combined-ca-bundle\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk" Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.576334 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-credential-keys\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.677860 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-credential-keys\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.678352 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-config-data\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.678391 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-fernet-keys\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.678427 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-scripts\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.678461 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j4fh\" (UniqueName: \"kubernetes.io/projected/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-kube-api-access-6j4fh\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.678490 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-combined-ca-bundle\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.685017 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-config-data\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.691183 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-credential-keys\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.694744 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-scripts\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.704951 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-fernet-keys\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.705754 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-combined-ca-bundle\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.713626 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j4fh\" (UniqueName: \"kubernetes.io/projected/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-kube-api-access-6j4fh\") pod \"keystone-bootstrap-2pgtk\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") " pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.790159 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.792203 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.800808 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.801107 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.822606 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.881629 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-run-httpd\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.881695 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-scripts\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.881763 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.881793 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-config-data\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.882067 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-log-httpd\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.882230 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fjdq\" (UniqueName: \"kubernetes.io/projected/91f52304-1de4-4a45-8165-8799cdefd9a7-kube-api-access-8fjdq\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.882342 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.897769 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.984125 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.984458 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-config-data\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.984557 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-log-httpd\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.984689 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fjdq\" (UniqueName: \"kubernetes.io/projected/91f52304-1de4-4a45-8165-8799cdefd9a7-kube-api-access-8fjdq\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.984793 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.984928 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-run-httpd\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.985074 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-scripts\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.985685 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-log-httpd\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:15 crc kubenswrapper[4860]: I0121 21:31:15.986243 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-run-httpd\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:16 crc kubenswrapper[4860]: I0121 21:31:15.991220 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:16 crc kubenswrapper[4860]: I0121 21:31:15.994675 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:16 crc kubenswrapper[4860]: I0121 21:31:15.995846 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-config-data\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:16 crc kubenswrapper[4860]: I0121 21:31:16.005567 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-scripts\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:16 crc kubenswrapper[4860]: I0121 21:31:16.009471 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fjdq\" (UniqueName: \"kubernetes.io/projected/91f52304-1de4-4a45-8165-8799cdefd9a7-kube-api-access-8fjdq\") pod \"ceilometer-0\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:16 crc kubenswrapper[4860]: I0121 21:31:16.113248 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:16 crc kubenswrapper[4860]: I0121 21:31:16.240595 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"2b229e16-dd0c-4c98-b734-dbe3c20639aa","Type":"ContainerStarted","Data":"9d1b222dda5c7a7f96f56868827e032434659598c4126ee37d50b126d9b15bcd"}
Jan 21 21:31:16 crc kubenswrapper[4860]: I0121 21:31:16.759253 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856e4581-4208-4131-94e2-e572ed382903" path="/var/lib/kubelet/pods/856e4581-4208-4131-94e2-e572ed382903/volumes"
Jan 21 21:31:16 crc kubenswrapper[4860]: I0121 21:31:16.843621 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-2pgtk"]
Jan 21 21:31:16 crc kubenswrapper[4860]: W0121 21:31:16.859343 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafd1faa5_b4df_4d8f_8b4e_fbf7495814cd.slice/crio-157a350499d39d02806b4011cb99c6a3ee5c48a3969987f1e794e194ec4fa8b6 WatchSource:0}: Error finding container 157a350499d39d02806b4011cb99c6a3ee5c48a3969987f1e794e194ec4fa8b6: Status 404 returned error can't find the container with id 157a350499d39d02806b4011cb99c6a3ee5c48a3969987f1e794e194ec4fa8b6
Jan 21 21:31:16 crc kubenswrapper[4860]: I0121 21:31:16.886559 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:31:16 crc kubenswrapper[4860]: W0121 21:31:16.897710 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91f52304_1de4_4a45_8165_8799cdefd9a7.slice/crio-6f01e95381776c4a23f600030f765cb7c3b46b4a390a93cc9d8fb0f35b5bcd70 WatchSource:0}: Error finding container 6f01e95381776c4a23f600030f765cb7c3b46b4a390a93cc9d8fb0f35b5bcd70: Status 404 returned error can't find the container with id 6f01e95381776c4a23f600030f765cb7c3b46b4a390a93cc9d8fb0f35b5bcd70
Jan 21 21:31:17 crc kubenswrapper[4860]: I0121 21:31:17.250768 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-2pgtk" event={"ID":"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd","Type":"ContainerStarted","Data":"157a350499d39d02806b4011cb99c6a3ee5c48a3969987f1e794e194ec4fa8b6"}
Jan 21 21:31:17 crc kubenswrapper[4860]: I0121 21:31:17.252279 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"91f52304-1de4-4a45-8165-8799cdefd9a7","Type":"ContainerStarted","Data":"6f01e95381776c4a23f600030f765cb7c3b46b4a390a93cc9d8fb0f35b5bcd70"}
Jan 21 21:31:18 crc kubenswrapper[4860]: I0121 21:31:18.193985 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:31:18 crc kubenswrapper[4860]: I0121 21:31:18.287220 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-2pgtk" event={"ID":"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd","Type":"ContainerStarted","Data":"54294abff40347cc68e45f4a266bded0002980952cca7233863473b214adbc57"}
Jan 21 21:31:18 crc kubenswrapper[4860]: I0121 21:31:18.309402 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-2pgtk" podStartSLOduration=3.309351402 podStartE2EDuration="3.309351402s" podCreationTimestamp="2026-01-21 21:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:31:18.305331437 +0000 UTC m=+1370.527509907" watchObservedRunningTime="2026-01-21 21:31:18.309351402 +0000 UTC m=+1370.531529892"
Jan 21 21:31:19 crc kubenswrapper[4860]: I0121 21:31:19.312052 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"2b229e16-dd0c-4c98-b734-dbe3c20639aa","Type":"ContainerStarted","Data":"9a6dc87178eb89c03abdcb085662bcf824521d608081bae47e6df3543f80e2e3"}
Jan 21 21:31:22 crc kubenswrapper[4860]: I0121 21:31:22.341680 4860 generic.go:334] "Generic (PLEG): container finished" podID="afd1faa5-b4df-4d8f-8b4e-fbf7495814cd" containerID="54294abff40347cc68e45f4a266bded0002980952cca7233863473b214adbc57" exitCode=0
Jan 21 21:31:22 crc kubenswrapper[4860]: I0121 21:31:22.341787 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-2pgtk" event={"ID":"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd","Type":"ContainerDied","Data":"54294abff40347cc68e45f4a266bded0002980952cca7233863473b214adbc57"}
Jan 21 21:31:23 crc kubenswrapper[4860]: I0121 21:31:23.352821 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"91f52304-1de4-4a45-8165-8799cdefd9a7","Type":"ContainerStarted","Data":"0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f"}
Jan 21 21:31:23 crc kubenswrapper[4860]: I0121 21:31:23.975337 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.131925 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-combined-ca-bundle\") pod \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") "
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.132483 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-credential-keys\") pod \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") "
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.132671 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-config-data\") pod \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") "
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.132733 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j4fh\" (UniqueName: \"kubernetes.io/projected/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-kube-api-access-6j4fh\") pod \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") "
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.132775 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-fernet-keys\") pod \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") "
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.132811 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-scripts\") pod \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\" (UID: \"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd\") "
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.138208 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd" (UID: "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.138523 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-scripts" (OuterVolumeSpecName: "scripts") pod "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd" (UID: "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.138699 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd" (UID: "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.138717 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-kube-api-access-6j4fh" (OuterVolumeSpecName: "kube-api-access-6j4fh") pod "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd" (UID: "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd"). InnerVolumeSpecName "kube-api-access-6j4fh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.157169 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-config-data" (OuterVolumeSpecName: "config-data") pod "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd" (UID: "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.162030 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd" (UID: "afd1faa5-b4df-4d8f-8b4e-fbf7495814cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.235479 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.235527 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j4fh\" (UniqueName: \"kubernetes.io/projected/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-kube-api-access-6j4fh\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.235539 4860 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.235549 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.235564 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.235584 4860 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.361037 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-2pgtk" event={"ID":"afd1faa5-b4df-4d8f-8b4e-fbf7495814cd","Type":"ContainerDied","Data":"157a350499d39d02806b4011cb99c6a3ee5c48a3969987f1e794e194ec4fa8b6"}
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.361081 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="157a350499d39d02806b4011cb99c6a3ee5c48a3969987f1e794e194ec4fa8b6"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.361134 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-2pgtk"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.374128 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"91f52304-1de4-4a45-8165-8799cdefd9a7","Type":"ContainerStarted","Data":"90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3"}
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.591669 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-2pgtk"]
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.591726 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-2pgtk"]
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.614920 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-d868p"]
Jan 21 21:31:24 crc kubenswrapper[4860]: E0121 21:31:24.615366 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd1faa5-b4df-4d8f-8b4e-fbf7495814cd" containerName="keystone-bootstrap"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.615384 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd1faa5-b4df-4d8f-8b4e-fbf7495814cd" containerName="keystone-bootstrap"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.615555 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd1faa5-b4df-4d8f-8b4e-fbf7495814cd" containerName="keystone-bootstrap"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.616364 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.620791 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.621383 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-d22jf"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.623216 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.626843 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.627267 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.642155 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-d868p"]
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.720450 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-scripts\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.721239 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-config-data\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.721327 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-credential-keys\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.721978 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcgxp\" (UniqueName: \"kubernetes.io/projected/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-kube-api-access-wcgxp\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.722061 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-combined-ca-bundle\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.722111 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-fernet-keys\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.824652 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-scripts\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.824776 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-config-data\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.824829 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-credential-keys\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.825054 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcgxp\" (UniqueName: \"kubernetes.io/projected/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-kube-api-access-wcgxp\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.826592 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-combined-ca-bundle\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.826672 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-fernet-keys\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.832098 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-scripts\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.833643 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-combined-ca-bundle\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.834740 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-config-data\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.834755 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-fernet-keys\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.839598 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-credential-keys\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.848862 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcgxp\" (UniqueName: \"kubernetes.io/projected/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-kube-api-access-wcgxp\") pod \"keystone-bootstrap-d868p\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:24 crc kubenswrapper[4860]: I0121 21:31:24.937422 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:25 crc kubenswrapper[4860]: I0121 21:31:25.437542 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-d868p"]
Jan 21 21:31:25 crc kubenswrapper[4860]: W0121 21:31:25.448351 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ec5cc83_a75e_49ca_963c_423f2d6af9c1.slice/crio-8e3f4d0e01037c05c6659f71bdd1fe4f2074d66a7ed6503380513742f6774278 WatchSource:0}: Error finding container 8e3f4d0e01037c05c6659f71bdd1fe4f2074d66a7ed6503380513742f6774278: Status 404 returned error can't find the container with id 8e3f4d0e01037c05c6659f71bdd1fe4f2074d66a7ed6503380513742f6774278
Jan 21 21:31:26 crc kubenswrapper[4860]: I0121 21:31:26.405678 4860 generic.go:334] "Generic (PLEG): container finished" podID="2b229e16-dd0c-4c98-b734-dbe3c20639aa" containerID="9a6dc87178eb89c03abdcb085662bcf824521d608081bae47e6df3543f80e2e3" exitCode=0
Jan 21 21:31:26 crc kubenswrapper[4860]: I0121 21:31:26.405809 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"2b229e16-dd0c-4c98-b734-dbe3c20639aa","Type":"ContainerDied","Data":"9a6dc87178eb89c03abdcb085662bcf824521d608081bae47e6df3543f80e2e3"}
Jan 21 21:31:26 crc kubenswrapper[4860]: I0121 21:31:26.411105 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-d868p" event={"ID":"0ec5cc83-a75e-49ca-963c-423f2d6af9c1","Type":"ContainerStarted","Data":"91bd7c218c4efb95cad7bf25d6f32ec21b4dae0bbfe76973cd1c24818130132b"}
Jan 21 21:31:26 crc kubenswrapper[4860]: I0121 21:31:26.411161 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-d868p" event={"ID":"0ec5cc83-a75e-49ca-963c-423f2d6af9c1","Type":"ContainerStarted","Data":"8e3f4d0e01037c05c6659f71bdd1fe4f2074d66a7ed6503380513742f6774278"}
Jan 21 21:31:26 crc kubenswrapper[4860]: I0121 21:31:26.493472 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-d868p" podStartSLOduration=2.493449682 podStartE2EDuration="2.493449682s" podCreationTimestamp="2026-01-21 21:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:31:26.485572296 +0000 UTC m=+1378.707750766" watchObservedRunningTime="2026-01-21 21:31:26.493449682 +0000 UTC m=+1378.715628152"
Jan 21 21:31:26 crc kubenswrapper[4860]: I0121 21:31:26.594285 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd1faa5-b4df-4d8f-8b4e-fbf7495814cd" path="/var/lib/kubelet/pods/afd1faa5-b4df-4d8f-8b4e-fbf7495814cd/volumes"
Jan 21 21:31:31 crc kubenswrapper[4860]: I0121 21:31:31.498638 4860 generic.go:334] "Generic (PLEG): container finished" podID="0ec5cc83-a75e-49ca-963c-423f2d6af9c1" containerID="91bd7c218c4efb95cad7bf25d6f32ec21b4dae0bbfe76973cd1c24818130132b" exitCode=0
Jan 21 21:31:31 crc kubenswrapper[4860]: I0121 21:31:31.499003 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-d868p" event={"ID":"0ec5cc83-a75e-49ca-963c-423f2d6af9c1","Type":"ContainerDied","Data":"91bd7c218c4efb95cad7bf25d6f32ec21b4dae0bbfe76973cd1c24818130132b"}
Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.508974 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"91f52304-1de4-4a45-8165-8799cdefd9a7","Type":"ContainerStarted","Data":"a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85"}
Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.512150 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"2b229e16-dd0c-4c98-b734-dbe3c20639aa","Type":"ContainerStarted","Data":"919844cf8221cdd85dad8ec858e75515383efa5808be5cfd466f0d1e84578161"}
Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.819244 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-d868p"
Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.888699 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-combined-ca-bundle\") pod \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") "
Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.888774 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-config-data\") pod \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") "
Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.888818 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-scripts\") pod \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") "
Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.888910 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName:
\"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-credential-keys\") pod \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.888967 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcgxp\" (UniqueName: \"kubernetes.io/projected/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-kube-api-access-wcgxp\") pod \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.889031 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-fernet-keys\") pod \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\" (UID: \"0ec5cc83-a75e-49ca-963c-423f2d6af9c1\") " Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.907324 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-scripts" (OuterVolumeSpecName: "scripts") pod "0ec5cc83-a75e-49ca-963c-423f2d6af9c1" (UID: "0ec5cc83-a75e-49ca-963c-423f2d6af9c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.907858 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0ec5cc83-a75e-49ca-963c-423f2d6af9c1" (UID: "0ec5cc83-a75e-49ca-963c-423f2d6af9c1"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.908579 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0ec5cc83-a75e-49ca-963c-423f2d6af9c1" (UID: "0ec5cc83-a75e-49ca-963c-423f2d6af9c1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.912410 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-kube-api-access-wcgxp" (OuterVolumeSpecName: "kube-api-access-wcgxp") pod "0ec5cc83-a75e-49ca-963c-423f2d6af9c1" (UID: "0ec5cc83-a75e-49ca-963c-423f2d6af9c1"). InnerVolumeSpecName "kube-api-access-wcgxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.923510 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ec5cc83-a75e-49ca-963c-423f2d6af9c1" (UID: "0ec5cc83-a75e-49ca-963c-423f2d6af9c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.925354 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-config-data" (OuterVolumeSpecName: "config-data") pod "0ec5cc83-a75e-49ca-963c-423f2d6af9c1" (UID: "0ec5cc83-a75e-49ca-963c-423f2d6af9c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.991586 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.991636 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.991649 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.991664 4860 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.991678 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcgxp\" (UniqueName: \"kubernetes.io/projected/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-kube-api-access-wcgxp\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:32 crc kubenswrapper[4860]: I0121 21:31:32.991694 4860 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0ec5cc83-a75e-49ca-963c-423f2d6af9c1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.526327 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-d868p" event={"ID":"0ec5cc83-a75e-49ca-963c-423f2d6af9c1","Type":"ContainerDied","Data":"8e3f4d0e01037c05c6659f71bdd1fe4f2074d66a7ed6503380513742f6774278"} Jan 21 21:31:33 crc 
kubenswrapper[4860]: I0121 21:31:33.526878 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e3f4d0e01037c05c6659f71bdd1fe4f2074d66a7ed6503380513742f6774278" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.526603 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-d868p" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.679084 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-85df5fbd4-9gdg7"] Jan 21 21:31:33 crc kubenswrapper[4860]: E0121 21:31:33.679676 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec5cc83-a75e-49ca-963c-423f2d6af9c1" containerName="keystone-bootstrap" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.679703 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec5cc83-a75e-49ca-963c-423f2d6af9c1" containerName="keystone-bootstrap" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.679959 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec5cc83-a75e-49ca-963c-423f2d6af9c1" containerName="keystone-bootstrap" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.680800 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.685973 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-public-svc" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.686264 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.686300 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-d22jf" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.686569 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.686749 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.687050 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-internal-svc" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.695547 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-85df5fbd4-9gdg7"] Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.813512 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-scripts\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.813601 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-fernet-keys\") pod 
\"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.813661 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-internal-tls-certs\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.813711 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-config-data\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.813756 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-combined-ca-bundle\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.813860 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-credential-keys\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.813906 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbvrf\" (UniqueName: 
\"kubernetes.io/projected/da6edf2d-041a-4469-a456-cae342270655-kube-api-access-kbvrf\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.813968 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-public-tls-certs\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.916305 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-scripts\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.916384 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-fernet-keys\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.916457 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-internal-tls-certs\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.916497 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-config-data\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.916534 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-combined-ca-bundle\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.916636 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-credential-keys\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.916676 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbvrf\" (UniqueName: \"kubernetes.io/projected/da6edf2d-041a-4469-a456-cae342270655-kube-api-access-kbvrf\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.916707 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-public-tls-certs\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.926793 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-scripts\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.924383 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-fernet-keys\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.929227 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-internal-tls-certs\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.950247 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-combined-ca-bundle\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:33 crc kubenswrapper[4860]: I0121 21:31:33.950435 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-public-tls-certs\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:34 crc kubenswrapper[4860]: I0121 21:31:34.049738 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-credential-keys\") 
pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:34 crc kubenswrapper[4860]: I0121 21:31:34.050391 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-config-data\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:43 crc kubenswrapper[4860]: I0121 21:31:43.039946 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbvrf\" (UniqueName: \"kubernetes.io/projected/da6edf2d-041a-4469-a456-cae342270655-kube-api-access-kbvrf\") pod \"keystone-85df5fbd4-9gdg7\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:43 crc kubenswrapper[4860]: I0121 21:31:43.078522 4860 trace.go:236] Trace[1809275796]: "Calculate volume metrics of persistence for pod watcher-kuttl-default/rabbitmq-notifications-server-0" (21-Jan-2026 21:31:35.462) (total time: 7616ms): Jan 21 21:31:43 crc kubenswrapper[4860]: Trace[1809275796]: [7.616285571s] [7.616285571s] END Jan 21 21:31:43 crc kubenswrapper[4860]: I0121 21:31:43.306555 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:31:43 crc kubenswrapper[4860]: I0121 21:31:43.830800 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nhqvg"] Jan 21 21:31:43 crc kubenswrapper[4860]: I0121 21:31:43.836981 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:31:43 crc kubenswrapper[4860]: I0121 21:31:43.847240 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhqvg"] Jan 21 21:31:43 crc kubenswrapper[4860]: I0121 21:31:43.990847 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7np58\" (UniqueName: \"kubernetes.io/projected/c90d47ce-27d6-4955-a529-5866f5ef4090-kube-api-access-7np58\") pod \"redhat-operators-nhqvg\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:31:43 crc kubenswrapper[4860]: I0121 21:31:43.990910 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-utilities\") pod \"redhat-operators-nhqvg\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:31:43 crc kubenswrapper[4860]: I0121 21:31:43.991076 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-catalog-content\") pod \"redhat-operators-nhqvg\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:31:44 crc kubenswrapper[4860]: I0121 21:31:44.092439 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7np58\" (UniqueName: \"kubernetes.io/projected/c90d47ce-27d6-4955-a529-5866f5ef4090-kube-api-access-7np58\") pod \"redhat-operators-nhqvg\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:31:44 crc kubenswrapper[4860]: I0121 21:31:44.093012 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-utilities\") pod \"redhat-operators-nhqvg\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:31:44 crc kubenswrapper[4860]: I0121 21:31:44.093602 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-utilities\") pod \"redhat-operators-nhqvg\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:31:44 crc kubenswrapper[4860]: I0121 21:31:44.093681 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-catalog-content\") pod \"redhat-operators-nhqvg\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:31:44 crc kubenswrapper[4860]: I0121 21:31:44.094039 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-catalog-content\") pod \"redhat-operators-nhqvg\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:31:44 crc kubenswrapper[4860]: I0121 21:31:44.119501 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7np58\" (UniqueName: \"kubernetes.io/projected/c90d47ce-27d6-4955-a529-5866f5ef4090-kube-api-access-7np58\") pod \"redhat-operators-nhqvg\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:31:44 crc kubenswrapper[4860]: I0121 21:31:44.171205 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:31:45 crc kubenswrapper[4860]: I0121 21:31:45.681855 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"2b229e16-dd0c-4c98-b734-dbe3c20639aa","Type":"ContainerStarted","Data":"7a74fd9f2132cf6b7ea60f698b3e3dd658d4d1c0e484ff75cd066ac18b88d6c5"} Jan 21 21:31:51 crc kubenswrapper[4860]: E0121 21:31:51.602231 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 21 21:31:51 crc kubenswrapper[4860]: E0121 21:31:51.602766 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-
8fjdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_watcher-kuttl-default(91f52304-1de4-4a45-8165-8799cdefd9a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 21:31:51 crc kubenswrapper[4860]: E0121 21:31:51.604055 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="watcher-kuttl-default/ceilometer-0" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7"
Jan 21 21:31:51 crc kubenswrapper[4860]: I0121 21:31:51.766559 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="ceilometer-central-agent" containerID="cri-o://0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f" gracePeriod=30
Jan 21 21:31:51 crc kubenswrapper[4860]: I0121 21:31:51.767121 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="ceilometer-notification-agent" containerID="cri-o://90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3" gracePeriod=30
Jan 21 21:31:51 crc kubenswrapper[4860]: I0121 21:31:51.767144 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="sg-core" containerID="cri-o://a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85" gracePeriod=30
Jan 21 21:31:51 crc kubenswrapper[4860]: I0121 21:31:51.972439 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhqvg"]
Jan 21 21:31:51 crc kubenswrapper[4860]: W0121 21:31:51.975098 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc90d47ce_27d6_4955_a529_5866f5ef4090.slice/crio-05936fb50441c98b1608c6355a1dd141ad096cefc674d091a85fa42eb26ae629 WatchSource:0}: Error finding container 05936fb50441c98b1608c6355a1dd141ad096cefc674d091a85fa42eb26ae629: Status 404 returned error can't find the container with id 05936fb50441c98b1608c6355a1dd141ad096cefc674d091a85fa42eb26ae629
Jan 21 21:31:52 crc kubenswrapper[4860]: W0121 21:31:52.102753 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda6edf2d_041a_4469_a456_cae342270655.slice/crio-e3a911bc39d9cfde74e0c33d42e4a48a95e16be60eb5fcd2a3c9437023203786 WatchSource:0}: Error finding container e3a911bc39d9cfde74e0c33d42e4a48a95e16be60eb5fcd2a3c9437023203786: Status 404 returned error can't find the container with id e3a911bc39d9cfde74e0c33d42e4a48a95e16be60eb5fcd2a3c9437023203786
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.103927 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-85df5fbd4-9gdg7"]
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.776960 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" event={"ID":"da6edf2d-041a-4469-a456-cae342270655","Type":"ContainerStarted","Data":"55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf"}
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.777072 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" event={"ID":"da6edf2d-041a-4469-a456-cae342270655","Type":"ContainerStarted","Data":"e3a911bc39d9cfde74e0c33d42e4a48a95e16be60eb5fcd2a3c9437023203786"}
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.777154 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7"
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.779390 4860 generic.go:334] "Generic (PLEG): container finished" podID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerID="0bdb1b2073b9cc214bf3cd000ff1215f6a103f7d65238a2fdfe88a94c94e5cff" exitCode=0
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.779458 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhqvg" event={"ID":"c90d47ce-27d6-4955-a529-5866f5ef4090","Type":"ContainerDied","Data":"0bdb1b2073b9cc214bf3cd000ff1215f6a103f7d65238a2fdfe88a94c94e5cff"}
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.779550 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhqvg" event={"ID":"c90d47ce-27d6-4955-a529-5866f5ef4090","Type":"ContainerStarted","Data":"05936fb50441c98b1608c6355a1dd141ad096cefc674d091a85fa42eb26ae629"}
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.785217 4860 generic.go:334] "Generic (PLEG): container finished" podID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerID="a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85" exitCode=2
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.785241 4860 generic.go:334] "Generic (PLEG): container finished" podID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerID="0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f" exitCode=0
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.785309 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"91f52304-1de4-4a45-8165-8799cdefd9a7","Type":"ContainerDied","Data":"a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85"}
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.785362 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"91f52304-1de4-4a45-8165-8799cdefd9a7","Type":"ContainerDied","Data":"0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f"}
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.790477 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"2b229e16-dd0c-4c98-b734-dbe3c20639aa","Type":"ContainerStarted","Data":"2497811ce43e7f6f1283534728d74b5d1be568a1a5c34bbf5dff39d0ee9d8826"}
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.814044 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" podStartSLOduration=19.814018629 podStartE2EDuration="19.814018629s" podCreationTimestamp="2026-01-21 21:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:31:52.812231513 +0000 UTC m=+1405.034410003" watchObservedRunningTime="2026-01-21 21:31:52.814018629 +0000 UTC m=+1405.036197099"
Jan 21 21:31:52 crc kubenswrapper[4860]: I0121 21:31:52.851779 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=38.851758966 podStartE2EDuration="38.851758966s" podCreationTimestamp="2026-01-21 21:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:31:52.847062879 +0000 UTC m=+1405.069241349" watchObservedRunningTime="2026-01-21 21:31:52.851758966 +0000 UTC m=+1405.073937446"
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.416966 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.521887 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-combined-ca-bundle\") pod \"91f52304-1de4-4a45-8165-8799cdefd9a7\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") "
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.522014 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-sg-core-conf-yaml\") pod \"91f52304-1de4-4a45-8165-8799cdefd9a7\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") "
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.522049 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-run-httpd\") pod \"91f52304-1de4-4a45-8165-8799cdefd9a7\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") "
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.522123 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fjdq\" (UniqueName: \"kubernetes.io/projected/91f52304-1de4-4a45-8165-8799cdefd9a7-kube-api-access-8fjdq\") pod \"91f52304-1de4-4a45-8165-8799cdefd9a7\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") "
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.522181 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-scripts\") pod \"91f52304-1de4-4a45-8165-8799cdefd9a7\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") "
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.522205 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-log-httpd\") pod \"91f52304-1de4-4a45-8165-8799cdefd9a7\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") "
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.522222 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-config-data\") pod \"91f52304-1de4-4a45-8165-8799cdefd9a7\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") "
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.523057 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "91f52304-1de4-4a45-8165-8799cdefd9a7" (UID: "91f52304-1de4-4a45-8165-8799cdefd9a7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.523213 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "91f52304-1de4-4a45-8165-8799cdefd9a7" (UID: "91f52304-1de4-4a45-8165-8799cdefd9a7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.528751 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f52304-1de4-4a45-8165-8799cdefd9a7-kube-api-access-8fjdq" (OuterVolumeSpecName: "kube-api-access-8fjdq") pod "91f52304-1de4-4a45-8165-8799cdefd9a7" (UID: "91f52304-1de4-4a45-8165-8799cdefd9a7"). InnerVolumeSpecName "kube-api-access-8fjdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.529226 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-scripts" (OuterVolumeSpecName: "scripts") pod "91f52304-1de4-4a45-8165-8799cdefd9a7" (UID: "91f52304-1de4-4a45-8165-8799cdefd9a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.560527 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "91f52304-1de4-4a45-8165-8799cdefd9a7" (UID: "91f52304-1de4-4a45-8165-8799cdefd9a7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:31:53 crc kubenswrapper[4860]: E0121 21:31:53.579741 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-combined-ca-bundle podName:91f52304-1de4-4a45-8165-8799cdefd9a7 nodeName:}" failed. No retries permitted until 2026-01-21 21:31:54.079697229 +0000 UTC m=+1406.301875709 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-combined-ca-bundle") pod "91f52304-1de4-4a45-8165-8799cdefd9a7" (UID: "91f52304-1de4-4a45-8165-8799cdefd9a7") : error deleting /var/lib/kubelet/pods/91f52304-1de4-4a45-8165-8799cdefd9a7/volume-subpaths: remove /var/lib/kubelet/pods/91f52304-1de4-4a45-8165-8799cdefd9a7/volume-subpaths: no such file or directory
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.584247 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-config-data" (OuterVolumeSpecName: "config-data") pod "91f52304-1de4-4a45-8165-8799cdefd9a7" (UID: "91f52304-1de4-4a45-8165-8799cdefd9a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.624103 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.624146 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.624156 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.624163 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.624176 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f52304-1de4-4a45-8165-8799cdefd9a7-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.624184 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fjdq\" (UniqueName: \"kubernetes.io/projected/91f52304-1de4-4a45-8165-8799cdefd9a7-kube-api-access-8fjdq\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.803096 4860 generic.go:334] "Generic (PLEG): container finished" podID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerID="90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3" exitCode=0
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.803198 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.803199 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"91f52304-1de4-4a45-8165-8799cdefd9a7","Type":"ContainerDied","Data":"90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3"}
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.803302 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"91f52304-1de4-4a45-8165-8799cdefd9a7","Type":"ContainerDied","Data":"6f01e95381776c4a23f600030f765cb7c3b46b4a390a93cc9d8fb0f35b5bcd70"}
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.803345 4860 scope.go:117] "RemoveContainer" containerID="a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85"
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.825459 4860 scope.go:117] "RemoveContainer" containerID="90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3"
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.846511 4860 scope.go:117] "RemoveContainer" containerID="0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f"
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.870723 4860 scope.go:117] "RemoveContainer" containerID="a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85"
Jan 21 21:31:53 crc kubenswrapper[4860]: E0121 21:31:53.871761 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85\": container with ID starting with a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85 not found: ID does not exist" containerID="a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85"
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.871833 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85"} err="failed to get container status \"a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85\": rpc error: code = NotFound desc = could not find container \"a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85\": container with ID starting with a5d56e866f36382f083a10d5069f99796d1461e2d024048210ad9464808c2b85 not found: ID does not exist"
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.871882 4860 scope.go:117] "RemoveContainer" containerID="90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3"
Jan 21 21:31:53 crc kubenswrapper[4860]: E0121 21:31:53.872711 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3\": container with ID starting with 90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3 not found: ID does not exist" containerID="90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3"
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.872785 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3"} err="failed to get container status \"90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3\": rpc error: code = NotFound desc = could not find container \"90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3\": container with ID starting with 90634af9678fd4b3cd63e964b5de185988197cf6da674799fa7172a9bb315af3 not found: ID does not exist"
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.872818 4860 scope.go:117] "RemoveContainer" containerID="0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f"
Jan 21 21:31:53 crc kubenswrapper[4860]: E0121 21:31:53.873272 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f\": container with ID starting with 0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f not found: ID does not exist" containerID="0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f"
Jan 21 21:31:53 crc kubenswrapper[4860]: I0121 21:31:53.873322 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f"} err="failed to get container status \"0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f\": rpc error: code = NotFound desc = could not find container \"0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f\": container with ID starting with 0385d837011c95caec7c7407fd6a513d23faf6b7eff3cb286770f1825e5ee29f not found: ID does not exist"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.133195 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-combined-ca-bundle\") pod \"91f52304-1de4-4a45-8165-8799cdefd9a7\" (UID: \"91f52304-1de4-4a45-8165-8799cdefd9a7\") "
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.143234 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91f52304-1de4-4a45-8165-8799cdefd9a7" (UID: "91f52304-1de4-4a45-8165-8799cdefd9a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.236121 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f52304-1de4-4a45-8165-8799cdefd9a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:31:54 crc kubenswrapper[4860]: E0121 21:31:54.469788 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91f52304_1de4_4a45_8165_8799cdefd9a7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91f52304_1de4_4a45_8165_8799cdefd9a7.slice/crio-6f01e95381776c4a23f600030f765cb7c3b46b4a390a93cc9d8fb0f35b5bcd70\": RecentStats: unable to find data in memory cache]"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.583061 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.606761 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.633687 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:31:54 crc kubenswrapper[4860]: E0121 21:31:54.634454 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="ceilometer-notification-agent"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.634481 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="ceilometer-notification-agent"
Jan 21 21:31:54 crc kubenswrapper[4860]: E0121 21:31:54.634521 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="ceilometer-central-agent"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.634533 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="ceilometer-central-agent"
Jan 21 21:31:54 crc kubenswrapper[4860]: E0121 21:31:54.634564 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="sg-core"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.634573 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="sg-core"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.634788 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="ceilometer-notification-agent"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.634820 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="ceilometer-central-agent"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.634840 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" containerName="sg-core"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.637900 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.641828 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.642150 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.649762 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.747409 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.747482 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-run-httpd\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.747802 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-scripts\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.748006 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-config-data\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.748282 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.748363 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-log-httpd\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.748450 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vpr2\" (UniqueName: \"kubernetes.io/projected/9001a854-6f86-4aae-8882-726263d2ac8c-kube-api-access-9vpr2\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.814950 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhqvg" event={"ID":"c90d47ce-27d6-4955-a529-5866f5ef4090","Type":"ContainerStarted","Data":"9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd"}
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.850115 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-scripts\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.850198 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-config-data\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.850286 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.850314 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-log-httpd\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.850340 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vpr2\" (UniqueName: \"kubernetes.io/projected/9001a854-6f86-4aae-8882-726263d2ac8c-kube-api-access-9vpr2\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.850361 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.850391 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-run-httpd\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.850973 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-run-httpd\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.851487 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-log-httpd\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.860032 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-scripts\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.860103 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.860253 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-config-data\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.873353 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.880657 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vpr2\" (UniqueName: \"kubernetes.io/projected/9001a854-6f86-4aae-8882-726263d2ac8c-kube-api-access-9vpr2\") pod \"ceilometer-0\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:54 crc kubenswrapper[4860]: I0121 21:31:54.960249 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:31:55 crc kubenswrapper[4860]: I0121 21:31:55.068869 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 21 21:31:58 crc kubenswrapper[4860]: I0121 21:31:56.590717 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f52304-1de4-4a45-8165-8799cdefd9a7" path="/var/lib/kubelet/pods/91f52304-1de4-4a45-8165-8799cdefd9a7/volumes"
Jan 21 21:31:58 crc kubenswrapper[4860]: I0121 21:31:56.846891 4860 generic.go:334] "Generic (PLEG): container finished" podID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerID="9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd" exitCode=0
Jan 21 21:31:58 crc kubenswrapper[4860]: I0121 21:31:56.846966 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhqvg" event={"ID":"c90d47ce-27d6-4955-a529-5866f5ef4090","Type":"ContainerDied","Data":"9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd"}
Jan 21 21:31:58 crc kubenswrapper[4860]: I0121 21:31:56.850315 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 21:31:58 crc kubenswrapper[4860]: I0121 21:31:58.536784 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:31:58 crc kubenswrapper[4860]: I0121 21:31:58.867898 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhqvg" event={"ID":"c90d47ce-27d6-4955-a529-5866f5ef4090","Type":"ContainerStarted","Data":"6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551"}
Jan 21 21:31:58 crc kubenswrapper[4860]: I0121 21:31:58.869728 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9001a854-6f86-4aae-8882-726263d2ac8c","Type":"ContainerStarted","Data":"eb5a58859ab24ddd34a775a370b48620f99e9dc939c6ec29639178705b4548a9"}
Jan 21 21:31:58 crc kubenswrapper[4860]: I0121 21:31:58.892794 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nhqvg" podStartSLOduration=10.472723842 podStartE2EDuration="15.892759925s" podCreationTimestamp="2026-01-21 21:31:43 +0000 UTC" firstStartedPulling="2026-01-21 21:31:52.781556466 +0000 UTC m=+1405.003734946" lastFinishedPulling="2026-01-21 21:31:58.201592519 +0000 UTC m=+1410.423771029" observedRunningTime="2026-01-21 21:31:58.891023332 +0000 UTC m=+1411.113201822" watchObservedRunningTime="2026-01-21 21:31:58.892759925 +0000 UTC m=+1411.114938405"
Jan 21 21:31:59 crc kubenswrapper[4860]: I0121 21:31:59.881761 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9001a854-6f86-4aae-8882-726263d2ac8c","Type":"ContainerStarted","Data":"3e7a2da82b1086dd54bf75068d2fe66a95b954c9aafc2f07edaf8a32330b8e11"}
Jan 21 21:32:00 crc kubenswrapper[4860]: I0121 21:32:00.068336 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 21 21:32:00 crc kubenswrapper[4860]: I0121 21:32:00.075820 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 21 21:32:00 crc kubenswrapper[4860]: I0121 21:32:00.894108 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9001a854-6f86-4aae-8882-726263d2ac8c","Type":"ContainerStarted","Data":"d91bc29c54895b7616fd8cb76a8443974214c6fbd9f5f6c3633e41d062635b68"}
Jan 21 21:32:00 crc kubenswrapper[4860]: I0121 21:32:00.894532 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9001a854-6f86-4aae-8882-726263d2ac8c","Type":"ContainerStarted","Data":"b318cea2d6973a218d7a1a5ef69ca2950a1f9f3357e0e2c37b352ab8de64a576"}
Jan 21 21:32:00 crc kubenswrapper[4860]: I0121 21:32:00.901502 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 21 21:32:02 crc kubenswrapper[4860]: I0121 21:32:02.929713 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9001a854-6f86-4aae-8882-726263d2ac8c","Type":"ContainerStarted","Data":"efc8cc649a296904735db21ba4a3bc4ace15dec9ce7c08d4374fc750f7f1d922"}
Jan 21 21:32:02 crc kubenswrapper[4860]: I0121 21:32:02.930279 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:32:02 crc kubenswrapper[4860]: I0121 21:32:02.958423 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=5.812957165 podStartE2EDuration="8.958399247s" podCreationTimestamp="2026-01-21 21:31:54 +0000 UTC" firstStartedPulling="2026-01-21 21:31:58.547960621 +0000 UTC m=+1410.770139091" lastFinishedPulling="2026-01-21 21:32:01.693402703 +0000 UTC m=+1413.915581173" observedRunningTime="2026-01-21 21:32:02.954198255 +0000 UTC m=+1415.176376725" watchObservedRunningTime="2026-01-21 21:32:02.958399247 +0000 UTC m=+1415.180577707"
Jan 21 21:32:04 crc kubenswrapper[4860]: I0121 21:32:04.171303 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nhqvg"
Jan 21 21:32:04 crc kubenswrapper[4860]: I0121 21:32:04.171716 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nhqvg"
Jan 21 21:32:05 crc kubenswrapper[4860]: I0121 21:32:05.218192 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nhqvg" podUID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerName="registry-server" probeResult="failure" output=<
Jan 21 21:32:05 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Jan 21 21:32:05 crc kubenswrapper[4860]: >
Jan 21 21:32:14 crc kubenswrapper[4860]: I0121 21:32:14.230094 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nhqvg"
Jan 21 21:32:14 crc kubenswrapper[4860]: I0121 21:32:14.297278 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nhqvg"
Jan 21 21:32:15 crc kubenswrapper[4860]: I0121 21:32:15.216803 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7"
Jan 21 21:32:16 crc kubenswrapper[4860]: I0121 21:32:16.984624 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"]
Jan 21 21:32:16 crc kubenswrapper[4860]: I0121 21:32:16.987303 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:16 crc kubenswrapper[4860]: I0121 21:32:16.994447 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config" Jan 21 21:32:16 crc kubenswrapper[4860]: I0121 21:32:16.994482 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-config-secret" Jan 21 21:32:16 crc kubenswrapper[4860]: I0121 21:32:16.994692 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstackclient-openstackclient-dockercfg-q7zpw" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.009099 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.179003 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.179077 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config-secret\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.179180 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2vls\" (UniqueName: \"kubernetes.io/projected/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-kube-api-access-w2vls\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " 
pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.179291 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.250998 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 21 21:32:17 crc kubenswrapper[4860]: E0121 21:32:17.251720 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-w2vls openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="watcher-kuttl-default/openstackclient" podUID="ee5a5a84-4147-4721-8c33-44cb6c9c3a0c" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.263457 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.289338 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2vls\" (UniqueName: \"kubernetes.io/projected/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-kube-api-access-w2vls\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.289447 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.289515 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.289554 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config-secret\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.290576 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: E0121 21:32:17.292259 4860 projected.go:194] Error preparing data for projected volume kube-api-access-w2vls for pod watcher-kuttl-default/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (ee5a5a84-4147-4721-8c33-44cb6c9c3a0c) does not match the UID in record. The object might have been deleted and then recreated Jan 21 21:32:17 crc kubenswrapper[4860]: E0121 21:32:17.292335 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-kube-api-access-w2vls podName:ee5a5a84-4147-4721-8c33-44cb6c9c3a0c nodeName:}" failed. No retries permitted until 2026-01-21 21:32:17.792312238 +0000 UTC m=+1430.014490708 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w2vls" (UniqueName: "kubernetes.io/projected/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-kube-api-access-w2vls") pod "openstackclient" (UID: "ee5a5a84-4147-4721-8c33-44cb6c9c3a0c") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (ee5a5a84-4147-4721-8c33-44cb6c9c3a0c) does not match the UID in record. The object might have been deleted and then recreated Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.298389 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.298569 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.299946 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.312887 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config-secret\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.323986 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.391838 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1696b722-1339-4636-99ca-32f9276ca7db-openstack-config-secret\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.391969 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1696b722-1339-4636-99ca-32f9276ca7db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.392061 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1696b722-1339-4636-99ca-32f9276ca7db-openstack-config\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.392112 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qk8rg\" (UniqueName: \"kubernetes.io/projected/1696b722-1339-4636-99ca-32f9276ca7db-kube-api-access-qk8rg\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.407359 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhqvg"] Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.407715 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nhqvg" podUID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerName="registry-server" containerID="cri-o://6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551" gracePeriod=2 Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.493741 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1696b722-1339-4636-99ca-32f9276ca7db-openstack-config-secret\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.493828 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1696b722-1339-4636-99ca-32f9276ca7db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.493880 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1696b722-1339-4636-99ca-32f9276ca7db-openstack-config\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.493919 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk8rg\" (UniqueName: \"kubernetes.io/projected/1696b722-1339-4636-99ca-32f9276ca7db-kube-api-access-qk8rg\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.495801 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1696b722-1339-4636-99ca-32f9276ca7db-openstack-config\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.498511 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1696b722-1339-4636-99ca-32f9276ca7db-openstack-config-secret\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.502532 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1696b722-1339-4636-99ca-32f9276ca7db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.514030 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk8rg\" (UniqueName: \"kubernetes.io/projected/1696b722-1339-4636-99ca-32f9276ca7db-kube-api-access-qk8rg\") pod \"openstackclient\" (UID: \"1696b722-1339-4636-99ca-32f9276ca7db\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.695913 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.807242 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2vls\" (UniqueName: \"kubernetes.io/projected/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-kube-api-access-w2vls\") pod \"openstackclient\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:17 crc kubenswrapper[4860]: E0121 21:32:17.827548 4860 projected.go:194] Error preparing data for projected volume kube-api-access-w2vls for pod watcher-kuttl-default/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (ee5a5a84-4147-4721-8c33-44cb6c9c3a0c) does not match the UID in record. The object might have been deleted and then recreated Jan 21 21:32:17 crc kubenswrapper[4860]: E0121 21:32:17.827657 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-kube-api-access-w2vls podName:ee5a5a84-4147-4721-8c33-44cb6c9c3a0c nodeName:}" failed. No retries permitted until 2026-01-21 21:32:18.827629941 +0000 UTC m=+1431.049808411 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-w2vls" (UniqueName: "kubernetes.io/projected/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-kube-api-access-w2vls") pod "openstackclient" (UID: "ee5a5a84-4147-4721-8c33-44cb6c9c3a0c") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (ee5a5a84-4147-4721-8c33-44cb6c9c3a0c) does not match the UID in record. The object might have been deleted and then recreated Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.870071 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.909699 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-utilities\") pod \"c90d47ce-27d6-4955-a529-5866f5ef4090\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.909856 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7np58\" (UniqueName: \"kubernetes.io/projected/c90d47ce-27d6-4955-a529-5866f5ef4090-kube-api-access-7np58\") pod \"c90d47ce-27d6-4955-a529-5866f5ef4090\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.909889 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-catalog-content\") pod \"c90d47ce-27d6-4955-a529-5866f5ef4090\" (UID: \"c90d47ce-27d6-4955-a529-5866f5ef4090\") " Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.916870 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-utilities" (OuterVolumeSpecName: "utilities") pod "c90d47ce-27d6-4955-a529-5866f5ef4090" (UID: "c90d47ce-27d6-4955-a529-5866f5ef4090"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:32:17 crc kubenswrapper[4860]: I0121 21:32:17.927351 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c90d47ce-27d6-4955-a529-5866f5ef4090-kube-api-access-7np58" (OuterVolumeSpecName: "kube-api-access-7np58") pod "c90d47ce-27d6-4955-a529-5866f5ef4090" (UID: "c90d47ce-27d6-4955-a529-5866f5ef4090"). InnerVolumeSpecName "kube-api-access-7np58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.015400 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7np58\" (UniqueName: \"kubernetes.io/projected/c90d47ce-27d6-4955-a529-5866f5ef4090-kube-api-access-7np58\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.016027 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.102981 4860 generic.go:334] "Generic (PLEG): container finished" podID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerID="6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551" exitCode=0 Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.103062 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.103323 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhqvg" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.103386 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhqvg" event={"ID":"c90d47ce-27d6-4955-a529-5866f5ef4090","Type":"ContainerDied","Data":"6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551"} Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.103474 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhqvg" event={"ID":"c90d47ce-27d6-4955-a529-5866f5ef4090","Type":"ContainerDied","Data":"05936fb50441c98b1608c6355a1dd141ad096cefc674d091a85fa42eb26ae629"} Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.103503 4860 scope.go:117] "RemoveContainer" containerID="6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.137928 4860 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="ee5a5a84-4147-4721-8c33-44cb6c9c3a0c" podUID="1696b722-1339-4636-99ca-32f9276ca7db" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.147486 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.153656 4860 scope.go:117] "RemoveContainer" containerID="9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.162295 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c90d47ce-27d6-4955-a529-5866f5ef4090" (UID: "c90d47ce-27d6-4955-a529-5866f5ef4090"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.202880 4860 scope.go:117] "RemoveContainer" containerID="0bdb1b2073b9cc214bf3cd000ff1215f6a103f7d65238a2fdfe88a94c94e5cff" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.220821 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90d47ce-27d6-4955-a529-5866f5ef4090-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.234005 4860 scope.go:117] "RemoveContainer" containerID="6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551" Jan 21 21:32:18 crc kubenswrapper[4860]: E0121 21:32:18.235910 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551\": container with ID starting with 6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551 not found: ID does not exist" containerID="6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.235991 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551"} err="failed to get container status \"6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551\": rpc error: code = NotFound desc = could not find container \"6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551\": container with ID starting with 6c22e7f6f5b188aaeb962eb1d7a30cdaf528244863ccfc1b8c420f2fabb5f551 not found: ID does not exist" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.236028 4860 scope.go:117] "RemoveContainer" containerID="9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd" Jan 21 21:32:18 crc kubenswrapper[4860]: E0121 21:32:18.236545 4860 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd\": container with ID starting with 9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd not found: ID does not exist" containerID="9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.236610 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd"} err="failed to get container status \"9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd\": rpc error: code = NotFound desc = could not find container \"9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd\": container with ID starting with 9a1cbfc692654ee1deea288669b485694847829f2a67d803ff9c36698cac61cd not found: ID does not exist" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.236652 4860 scope.go:117] "RemoveContainer" containerID="0bdb1b2073b9cc214bf3cd000ff1215f6a103f7d65238a2fdfe88a94c94e5cff" Jan 21 21:32:18 crc kubenswrapper[4860]: E0121 21:32:18.236962 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bdb1b2073b9cc214bf3cd000ff1215f6a103f7d65238a2fdfe88a94c94e5cff\": container with ID starting with 0bdb1b2073b9cc214bf3cd000ff1215f6a103f7d65238a2fdfe88a94c94e5cff not found: ID does not exist" containerID="0bdb1b2073b9cc214bf3cd000ff1215f6a103f7d65238a2fdfe88a94c94e5cff" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.236990 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bdb1b2073b9cc214bf3cd000ff1215f6a103f7d65238a2fdfe88a94c94e5cff"} err="failed to get container status \"0bdb1b2073b9cc214bf3cd000ff1215f6a103f7d65238a2fdfe88a94c94e5cff\": rpc error: code = NotFound desc = could 
not find container \"0bdb1b2073b9cc214bf3cd000ff1215f6a103f7d65238a2fdfe88a94c94e5cff\": container with ID starting with 0bdb1b2073b9cc214bf3cd000ff1215f6a103f7d65238a2fdfe88a94c94e5cff not found: ID does not exist" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.322328 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config-secret\") pod \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.322542 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config\") pod \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.322574 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-combined-ca-bundle\") pod \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\" (UID: \"ee5a5a84-4147-4721-8c33-44cb6c9c3a0c\") " Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.322912 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2vls\" (UniqueName: \"kubernetes.io/projected/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-kube-api-access-w2vls\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.324061 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ee5a5a84-4147-4721-8c33-44cb6c9c3a0c" (UID: "ee5a5a84-4147-4721-8c33-44cb6c9c3a0c"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.327943 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ee5a5a84-4147-4721-8c33-44cb6c9c3a0c" (UID: "ee5a5a84-4147-4721-8c33-44cb6c9c3a0c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.327995 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee5a5a84-4147-4721-8c33-44cb6c9c3a0c" (UID: "ee5a5a84-4147-4721-8c33-44cb6c9c3a0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.427252 4860 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.427304 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.427317 4860 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.430322 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 21 21:32:18 crc kubenswrapper[4860]: 
W0121 21:32:18.442798 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1696b722_1339_4636_99ca_32f9276ca7db.slice/crio-a7c376e83d1ae5996b223134934afb6a7d06e5a5787cdf97a77be8e207f08fd8 WatchSource:0}: Error finding container a7c376e83d1ae5996b223134934afb6a7d06e5a5787cdf97a77be8e207f08fd8: Status 404 returned error can't find the container with id a7c376e83d1ae5996b223134934afb6a7d06e5a5787cdf97a77be8e207f08fd8 Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.458655 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhqvg"] Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.469462 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nhqvg"] Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.589699 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c90d47ce-27d6-4955-a529-5866f5ef4090" path="/var/lib/kubelet/pods/c90d47ce-27d6-4955-a529-5866f5ef4090/volumes" Jan 21 21:32:18 crc kubenswrapper[4860]: I0121 21:32:18.590762 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee5a5a84-4147-4721-8c33-44cb6c9c3a0c" path="/var/lib/kubelet/pods/ee5a5a84-4147-4721-8c33-44cb6c9c3a0c/volumes" Jan 21 21:32:19 crc kubenswrapper[4860]: I0121 21:32:19.118156 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"1696b722-1339-4636-99ca-32f9276ca7db","Type":"ContainerStarted","Data":"a7c376e83d1ae5996b223134934afb6a7d06e5a5787cdf97a77be8e207f08fd8"} Jan 21 21:32:19 crc kubenswrapper[4860]: I0121 21:32:19.118198 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 21 21:32:19 crc kubenswrapper[4860]: I0121 21:32:19.130197 4860 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="ee5a5a84-4147-4721-8c33-44cb6c9c3a0c" podUID="1696b722-1339-4636-99ca-32f9276ca7db" Jan 21 21:32:24 crc kubenswrapper[4860]: I0121 21:32:24.971752 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:28 crc kubenswrapper[4860]: I0121 21:32:28.805297 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 21 21:32:28 crc kubenswrapper[4860]: I0121 21:32:28.805910 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="7e3962c5-1406-4ba2-8183-f001ebb09796" containerName="kube-state-metrics" containerID="cri-o://f693a81a1e0ac6d092dd33ba32dc026fe00fda900afa7aa5565f4d896c9c9e85" gracePeriod=30 Jan 21 21:32:29 crc kubenswrapper[4860]: I0121 21:32:29.516450 4860 generic.go:334] "Generic (PLEG): container finished" podID="7e3962c5-1406-4ba2-8183-f001ebb09796" containerID="f693a81a1e0ac6d092dd33ba32dc026fe00fda900afa7aa5565f4d896c9c9e85" exitCode=2 Jan 21 21:32:29 crc kubenswrapper[4860]: I0121 21:32:29.516510 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"7e3962c5-1406-4ba2-8183-f001ebb09796","Type":"ContainerDied","Data":"f693a81a1e0ac6d092dd33ba32dc026fe00fda900afa7aa5565f4d896c9c9e85"} Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.202023 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.202413 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="ceilometer-central-agent" containerID="cri-o://3e7a2da82b1086dd54bf75068d2fe66a95b954c9aafc2f07edaf8a32330b8e11" gracePeriod=30 Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.203025 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="proxy-httpd" containerID="cri-o://efc8cc649a296904735db21ba4a3bc4ace15dec9ce7c08d4374fc750f7f1d922" gracePeriod=30 Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.203103 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="sg-core" containerID="cri-o://d91bc29c54895b7616fd8cb76a8443974214c6fbd9f5f6c3633e41d062635b68" gracePeriod=30 Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.203154 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="ceilometer-notification-agent" containerID="cri-o://b318cea2d6973a218d7a1a5ef69ca2950a1f9f3357e0e2c37b352ab8de64a576" gracePeriod=30 Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.529900 4860 generic.go:334] "Generic (PLEG): container finished" podID="9001a854-6f86-4aae-8882-726263d2ac8c" containerID="efc8cc649a296904735db21ba4a3bc4ace15dec9ce7c08d4374fc750f7f1d922" exitCode=0 Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.529966 4860 generic.go:334] "Generic (PLEG): container finished" podID="9001a854-6f86-4aae-8882-726263d2ac8c" containerID="d91bc29c54895b7616fd8cb76a8443974214c6fbd9f5f6c3633e41d062635b68" exitCode=2 Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.529985 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"9001a854-6f86-4aae-8882-726263d2ac8c","Type":"ContainerDied","Data":"efc8cc649a296904735db21ba4a3bc4ace15dec9ce7c08d4374fc750f7f1d922"} Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.530037 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9001a854-6f86-4aae-8882-726263d2ac8c","Type":"ContainerDied","Data":"d91bc29c54895b7616fd8cb76a8443974214c6fbd9f5f6c3633e41d062635b68"} Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.919824 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.992809 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc5hk\" (UniqueName: \"kubernetes.io/projected/7e3962c5-1406-4ba2-8183-f001ebb09796-kube-api-access-zc5hk\") pod \"7e3962c5-1406-4ba2-8183-f001ebb09796\" (UID: \"7e3962c5-1406-4ba2-8183-f001ebb09796\") " Jan 21 21:32:30 crc kubenswrapper[4860]: I0121 21:32:30.996700 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3962c5-1406-4ba2-8183-f001ebb09796-kube-api-access-zc5hk" (OuterVolumeSpecName: "kube-api-access-zc5hk") pod "7e3962c5-1406-4ba2-8183-f001ebb09796" (UID: "7e3962c5-1406-4ba2-8183-f001ebb09796"). InnerVolumeSpecName "kube-api-access-zc5hk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.095539 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc5hk\" (UniqueName: \"kubernetes.io/projected/7e3962c5-1406-4ba2-8183-f001ebb09796-kube-api-access-zc5hk\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.585021 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"1696b722-1339-4636-99ca-32f9276ca7db","Type":"ContainerStarted","Data":"c862cbf4420ac8745def0f244c01ef1773759dec8ff2e938ce68b8d4ca3e33f8"} Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.596814 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"7e3962c5-1406-4ba2-8183-f001ebb09796","Type":"ContainerDied","Data":"b28869fa5824b9cc0386f007f983c774c996211d185347b621d327c23fd1e7b9"} Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.596975 4860 scope.go:117] "RemoveContainer" containerID="f693a81a1e0ac6d092dd33ba32dc026fe00fda900afa7aa5565f4d896c9c9e85" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.597399 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.622438 4860 generic.go:334] "Generic (PLEG): container finished" podID="9001a854-6f86-4aae-8882-726263d2ac8c" containerID="3e7a2da82b1086dd54bf75068d2fe66a95b954c9aafc2f07edaf8a32330b8e11" exitCode=0 Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.622508 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9001a854-6f86-4aae-8882-726263d2ac8c","Type":"ContainerDied","Data":"3e7a2da82b1086dd54bf75068d2fe66a95b954c9aafc2f07edaf8a32330b8e11"} Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.625403 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstackclient" podStartSLOduration=2.400713587 podStartE2EDuration="14.625360846s" podCreationTimestamp="2026-01-21 21:32:17 +0000 UTC" firstStartedPulling="2026-01-21 21:32:18.445744286 +0000 UTC m=+1430.667922766" lastFinishedPulling="2026-01-21 21:32:30.670391555 +0000 UTC m=+1442.892570025" observedRunningTime="2026-01-21 21:32:31.62135784 +0000 UTC m=+1443.843536310" watchObservedRunningTime="2026-01-21 21:32:31.625360846 +0000 UTC m=+1443.847539316" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.673345 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.686572 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.716323 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 21 21:32:31 crc kubenswrapper[4860]: E0121 21:32:31.716799 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerName="extract-utilities" Jan 21 21:32:31 crc 
kubenswrapper[4860]: I0121 21:32:31.716817 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerName="extract-utilities" Jan 21 21:32:31 crc kubenswrapper[4860]: E0121 21:32:31.716842 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerName="extract-content" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.716848 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerName="extract-content" Jan 21 21:32:31 crc kubenswrapper[4860]: E0121 21:32:31.716872 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerName="registry-server" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.716880 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerName="registry-server" Jan 21 21:32:31 crc kubenswrapper[4860]: E0121 21:32:31.716889 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3962c5-1406-4ba2-8183-f001ebb09796" containerName="kube-state-metrics" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.716895 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3962c5-1406-4ba2-8183-f001ebb09796" containerName="kube-state-metrics" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.717077 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3962c5-1406-4ba2-8183-f001ebb09796" containerName="kube-state-metrics" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.717090 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c90d47ce-27d6-4955-a529-5866f5ef4090" containerName="registry-server" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.718048 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.726416 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"kube-state-metrics-tls-config" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.726417 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-kube-state-metrics-svc" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.761016 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.916579 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.916728 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79ck2\" (UniqueName: \"kubernetes.io/projected/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-kube-api-access-79ck2\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.916855 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:31 crc kubenswrapper[4860]: I0121 21:32:31.916960 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.018501 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79ck2\" (UniqueName: \"kubernetes.io/projected/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-kube-api-access-79ck2\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.018609 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.018679 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.018737 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.037733 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.038194 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.038510 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.052523 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79ck2\" (UniqueName: \"kubernetes.io/projected/28efe2fc-3f49-48b8-91f3-29b7a2d6879e-kube-api-access-79ck2\") pod \"kube-state-metrics-0\" (UID: \"28efe2fc-3f49-48b8-91f3-29b7a2d6879e\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.340945 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.596741 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3962c5-1406-4ba2-8183-f001ebb09796" path="/var/lib/kubelet/pods/7e3962c5-1406-4ba2-8183-f001ebb09796/volumes" Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.640433 4860 generic.go:334] "Generic (PLEG): container finished" podID="9001a854-6f86-4aae-8882-726263d2ac8c" containerID="b318cea2d6973a218d7a1a5ef69ca2950a1f9f3357e0e2c37b352ab8de64a576" exitCode=0 Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.641796 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9001a854-6f86-4aae-8882-726263d2ac8c","Type":"ContainerDied","Data":"b318cea2d6973a218d7a1a5ef69ca2950a1f9f3357e0e2c37b352ab8de64a576"} Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.740454 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 21 21:32:32 crc kubenswrapper[4860]: I0121 21:32:32.850162 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.042524 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-combined-ca-bundle\") pod \"9001a854-6f86-4aae-8882-726263d2ac8c\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.042950 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-run-httpd\") pod \"9001a854-6f86-4aae-8882-726263d2ac8c\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.043079 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-log-httpd\") pod \"9001a854-6f86-4aae-8882-726263d2ac8c\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.043241 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vpr2\" (UniqueName: \"kubernetes.io/projected/9001a854-6f86-4aae-8882-726263d2ac8c-kube-api-access-9vpr2\") pod \"9001a854-6f86-4aae-8882-726263d2ac8c\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.043335 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-sg-core-conf-yaml\") pod \"9001a854-6f86-4aae-8882-726263d2ac8c\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.043836 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-scripts\") pod \"9001a854-6f86-4aae-8882-726263d2ac8c\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.043921 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-config-data\") pod \"9001a854-6f86-4aae-8882-726263d2ac8c\" (UID: \"9001a854-6f86-4aae-8882-726263d2ac8c\") " Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.043414 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9001a854-6f86-4aae-8882-726263d2ac8c" (UID: "9001a854-6f86-4aae-8882-726263d2ac8c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.044853 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9001a854-6f86-4aae-8882-726263d2ac8c" (UID: "9001a854-6f86-4aae-8882-726263d2ac8c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.050127 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-scripts" (OuterVolumeSpecName: "scripts") pod "9001a854-6f86-4aae-8882-726263d2ac8c" (UID: "9001a854-6f86-4aae-8882-726263d2ac8c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.050227 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9001a854-6f86-4aae-8882-726263d2ac8c-kube-api-access-9vpr2" (OuterVolumeSpecName: "kube-api-access-9vpr2") pod "9001a854-6f86-4aae-8882-726263d2ac8c" (UID: "9001a854-6f86-4aae-8882-726263d2ac8c"). InnerVolumeSpecName "kube-api-access-9vpr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.069648 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9001a854-6f86-4aae-8882-726263d2ac8c" (UID: "9001a854-6f86-4aae-8882-726263d2ac8c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.118439 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9001a854-6f86-4aae-8882-726263d2ac8c" (UID: "9001a854-6f86-4aae-8882-726263d2ac8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.168383 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-config-data" (OuterVolumeSpecName: "config-data") pod "9001a854-6f86-4aae-8882-726263d2ac8c" (UID: "9001a854-6f86-4aae-8882-726263d2ac8c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.168869 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.168914 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vpr2\" (UniqueName: \"kubernetes.io/projected/9001a854-6f86-4aae-8882-726263d2ac8c-kube-api-access-9vpr2\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.168980 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.169001 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.169010 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.169019 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9001a854-6f86-4aae-8882-726263d2ac8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.169052 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9001a854-6f86-4aae-8882-726263d2ac8c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.837790 4860 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"28efe2fc-3f49-48b8-91f3-29b7a2d6879e","Type":"ContainerStarted","Data":"2168173e04e359fd981a1065f351860915a372f82095e7a00088db8f8afe3bf6"} Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.910346 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9001a854-6f86-4aae-8882-726263d2ac8c","Type":"ContainerDied","Data":"eb5a58859ab24ddd34a775a370b48620f99e9dc939c6ec29639178705b4548a9"} Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.910552 4860 scope.go:117] "RemoveContainer" containerID="efc8cc649a296904735db21ba4a3bc4ace15dec9ce7c08d4374fc750f7f1d922" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.910880 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.971476 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:32:33 crc kubenswrapper[4860]: I0121 21:32:33.991202 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.031269 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:32:34 crc kubenswrapper[4860]: E0121 21:32:34.031796 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="ceilometer-notification-agent" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.031821 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="ceilometer-notification-agent" Jan 21 21:32:34 crc kubenswrapper[4860]: E0121 21:32:34.031843 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" 
containerName="ceilometer-central-agent" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.031851 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="ceilometer-central-agent" Jan 21 21:32:34 crc kubenswrapper[4860]: E0121 21:32:34.031871 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="proxy-httpd" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.031878 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="proxy-httpd" Jan 21 21:32:34 crc kubenswrapper[4860]: E0121 21:32:34.031900 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="sg-core" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.031906 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="sg-core" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.032134 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="ceilometer-central-agent" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.032157 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="ceilometer-notification-agent" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.032184 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="proxy-httpd" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.032195 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" containerName="sg-core" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.040789 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.042508 4860 scope.go:117] "RemoveContainer" containerID="d91bc29c54895b7616fd8cb76a8443974214c6fbd9f5f6c3633e41d062635b68" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.044009 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.044360 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.044736 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.051330 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.075267 4860 scope.go:117] "RemoveContainer" containerID="b318cea2d6973a218d7a1a5ef69ca2950a1f9f3357e0e2c37b352ab8de64a576" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.116069 4860 scope.go:117] "RemoveContainer" containerID="3e7a2da82b1086dd54bf75068d2fe66a95b954c9aafc2f07edaf8a32330b8e11" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.177612 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.177732 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-config-data\") pod \"ceilometer-0\" (UID: 
\"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.177777 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-log-httpd\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.177869 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.177925 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-scripts\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.177992 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.178021 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wgp2\" (UniqueName: \"kubernetes.io/projected/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-kube-api-access-5wgp2\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.178108 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-run-httpd\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.279360 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-scripts\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.279436 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.279467 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wgp2\" (UniqueName: \"kubernetes.io/projected/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-kube-api-access-5wgp2\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.279504 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-run-httpd\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.279551 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.279661 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-config-data\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.279696 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-log-httpd\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.279744 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.281464 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-run-httpd\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.282102 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-log-httpd\") pod \"ceilometer-0\" (UID: 
\"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.284529 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-scripts\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.284164 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.294513 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.299615 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-config-data\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.303576 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.306565 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5wgp2\" (UniqueName: \"kubernetes.io/projected/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-kube-api-access-5wgp2\") pod \"ceilometer-0\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.362255 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.602340 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9001a854-6f86-4aae-8882-726263d2ac8c" path="/var/lib/kubelet/pods/9001a854-6f86-4aae-8882-726263d2ac8c/volumes" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.921049 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"28efe2fc-3f49-48b8-91f3-29b7a2d6879e","Type":"ContainerStarted","Data":"830739bdbe6c1b94cac59e70b4bceba2e990b066675f9e27ecc0d4dce1c909d0"} Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.923120 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.937376 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:32:34 crc kubenswrapper[4860]: I0121 21:32:34.974803 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.574744378 podStartE2EDuration="3.97478127s" podCreationTimestamp="2026-01-21 21:32:31 +0000 UTC" firstStartedPulling="2026-01-21 21:32:32.772025891 +0000 UTC m=+1444.994204361" lastFinishedPulling="2026-01-21 21:32:34.172062783 +0000 UTC m=+1446.394241253" observedRunningTime="2026-01-21 21:32:34.968451043 +0000 UTC m=+1447.190629553" watchObservedRunningTime="2026-01-21 21:32:34.97478127 +0000 UTC m=+1447.196959740" Jan 21 21:32:35 crc 
kubenswrapper[4860]: I0121 21:32:35.947418 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6c28f496-1ef7-4df4-aed1-96bf3641e4ff","Type":"ContainerStarted","Data":"cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d"} Jan 21 21:32:35 crc kubenswrapper[4860]: I0121 21:32:35.947959 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6c28f496-1ef7-4df4-aed1-96bf3641e4ff","Type":"ContainerStarted","Data":"a362986d7394c0f5fd2ab9342dff171d173379f6295bf0ef845ae03cc0c36e32"} Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.310144 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-xlmw8"] Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.311703 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xlmw8" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.320493 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xlmw8"] Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.413670 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-55db-account-create-update-6xm4r"] Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.421392 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.422994 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mfx9\" (UniqueName: \"kubernetes.io/projected/50bb1894-3b38-44ab-b3cf-bf2e334673b4-kube-api-access-9mfx9\") pod \"watcher-db-create-xlmw8\" (UID: \"50bb1894-3b38-44ab-b3cf-bf2e334673b4\") " pod="watcher-kuttl-default/watcher-db-create-xlmw8" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.423356 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50bb1894-3b38-44ab-b3cf-bf2e334673b4-operator-scripts\") pod \"watcher-db-create-xlmw8\" (UID: \"50bb1894-3b38-44ab-b3cf-bf2e334673b4\") " pod="watcher-kuttl-default/watcher-db-create-xlmw8" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.426351 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.433526 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-55db-account-create-update-6xm4r"] Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.524595 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mfx9\" (UniqueName: \"kubernetes.io/projected/50bb1894-3b38-44ab-b3cf-bf2e334673b4-kube-api-access-9mfx9\") pod \"watcher-db-create-xlmw8\" (UID: \"50bb1894-3b38-44ab-b3cf-bf2e334673b4\") " pod="watcher-kuttl-default/watcher-db-create-xlmw8" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.525633 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sx87\" (UniqueName: \"kubernetes.io/projected/fea08dc1-90f2-4d48-844e-a4eb915e2470-kube-api-access-9sx87\") pod 
\"watcher-55db-account-create-update-6xm4r\" (UID: \"fea08dc1-90f2-4d48-844e-a4eb915e2470\") " pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.525866 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50bb1894-3b38-44ab-b3cf-bf2e334673b4-operator-scripts\") pod \"watcher-db-create-xlmw8\" (UID: \"50bb1894-3b38-44ab-b3cf-bf2e334673b4\") " pod="watcher-kuttl-default/watcher-db-create-xlmw8" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.526668 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea08dc1-90f2-4d48-844e-a4eb915e2470-operator-scripts\") pod \"watcher-55db-account-create-update-6xm4r\" (UID: \"fea08dc1-90f2-4d48-844e-a4eb915e2470\") " pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.527877 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50bb1894-3b38-44ab-b3cf-bf2e334673b4-operator-scripts\") pod \"watcher-db-create-xlmw8\" (UID: \"50bb1894-3b38-44ab-b3cf-bf2e334673b4\") " pod="watcher-kuttl-default/watcher-db-create-xlmw8" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.548882 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mfx9\" (UniqueName: \"kubernetes.io/projected/50bb1894-3b38-44ab-b3cf-bf2e334673b4-kube-api-access-9mfx9\") pod \"watcher-db-create-xlmw8\" (UID: \"50bb1894-3b38-44ab-b3cf-bf2e334673b4\") " pod="watcher-kuttl-default/watcher-db-create-xlmw8" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.629204 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sx87\" (UniqueName: 
\"kubernetes.io/projected/fea08dc1-90f2-4d48-844e-a4eb915e2470-kube-api-access-9sx87\") pod \"watcher-55db-account-create-update-6xm4r\" (UID: \"fea08dc1-90f2-4d48-844e-a4eb915e2470\") " pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.629467 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea08dc1-90f2-4d48-844e-a4eb915e2470-operator-scripts\") pod \"watcher-55db-account-create-update-6xm4r\" (UID: \"fea08dc1-90f2-4d48-844e-a4eb915e2470\") " pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.630597 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea08dc1-90f2-4d48-844e-a4eb915e2470-operator-scripts\") pod \"watcher-55db-account-create-update-6xm4r\" (UID: \"fea08dc1-90f2-4d48-844e-a4eb915e2470\") " pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.650585 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sx87\" (UniqueName: \"kubernetes.io/projected/fea08dc1-90f2-4d48-844e-a4eb915e2470-kube-api-access-9sx87\") pod \"watcher-55db-account-create-update-6xm4r\" (UID: \"fea08dc1-90f2-4d48-844e-a4eb915e2470\") " pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.721511 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xlmw8" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.760207 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" Jan 21 21:32:36 crc kubenswrapper[4860]: I0121 21:32:36.974042 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6c28f496-1ef7-4df4-aed1-96bf3641e4ff","Type":"ContainerStarted","Data":"b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b"} Jan 21 21:32:37 crc kubenswrapper[4860]: I0121 21:32:37.073043 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xlmw8"] Jan 21 21:32:37 crc kubenswrapper[4860]: I0121 21:32:37.392158 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-55db-account-create-update-6xm4r"] Jan 21 21:32:37 crc kubenswrapper[4860]: W0121 21:32:37.397139 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfea08dc1_90f2_4d48_844e_a4eb915e2470.slice/crio-e1348a7fa4085c5615dbb0626a43c5631d8438fdbafb2e0f22e7a496e07ac38f WatchSource:0}: Error finding container e1348a7fa4085c5615dbb0626a43c5631d8438fdbafb2e0f22e7a496e07ac38f: Status 404 returned error can't find the container with id e1348a7fa4085c5615dbb0626a43c5631d8438fdbafb2e0f22e7a496e07ac38f Jan 21 21:32:38 crc kubenswrapper[4860]: I0121 21:32:38.137984 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6c28f496-1ef7-4df4-aed1-96bf3641e4ff","Type":"ContainerStarted","Data":"7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00"} Jan 21 21:32:38 crc kubenswrapper[4860]: I0121 21:32:38.149171 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" event={"ID":"fea08dc1-90f2-4d48-844e-a4eb915e2470","Type":"ContainerStarted","Data":"c8d0df3e3bc86d46d44ef7633ba86773c1b75930aa8b3b363f80c7b1015f16b9"} Jan 21 21:32:38 crc kubenswrapper[4860]: 
I0121 21:32:38.149260 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" event={"ID":"fea08dc1-90f2-4d48-844e-a4eb915e2470","Type":"ContainerStarted","Data":"e1348a7fa4085c5615dbb0626a43c5631d8438fdbafb2e0f22e7a496e07ac38f"} Jan 21 21:32:38 crc kubenswrapper[4860]: I0121 21:32:38.162478 4860 generic.go:334] "Generic (PLEG): container finished" podID="50bb1894-3b38-44ab-b3cf-bf2e334673b4" containerID="3239fcd0150afce419796a6fda8adf4ac71a4dc43a117cef2ced29c08aa29aeb" exitCode=0 Jan 21 21:32:38 crc kubenswrapper[4860]: I0121 21:32:38.162552 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-xlmw8" event={"ID":"50bb1894-3b38-44ab-b3cf-bf2e334673b4","Type":"ContainerDied","Data":"3239fcd0150afce419796a6fda8adf4ac71a4dc43a117cef2ced29c08aa29aeb"} Jan 21 21:32:38 crc kubenswrapper[4860]: I0121 21:32:38.162588 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-xlmw8" event={"ID":"50bb1894-3b38-44ab-b3cf-bf2e334673b4","Type":"ContainerStarted","Data":"063c8b7cdda6dfb126b0513baa48c3345928d1d02cd156ad1dbab834198e74e9"} Jan 21 21:32:38 crc kubenswrapper[4860]: I0121 21:32:38.186199 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" podStartSLOduration=2.186169118 podStartE2EDuration="2.186169118s" podCreationTimestamp="2026-01-21 21:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:32:38.174301588 +0000 UTC m=+1450.396480058" watchObservedRunningTime="2026-01-21 21:32:38.186169118 +0000 UTC m=+1450.408347608" Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.185836 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"6c28f496-1ef7-4df4-aed1-96bf3641e4ff","Type":"ContainerStarted","Data":"d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28"} Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.186220 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.187913 4860 generic.go:334] "Generic (PLEG): container finished" podID="fea08dc1-90f2-4d48-844e-a4eb915e2470" containerID="c8d0df3e3bc86d46d44ef7633ba86773c1b75930aa8b3b363f80c7b1015f16b9" exitCode=0 Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.188235 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" event={"ID":"fea08dc1-90f2-4d48-844e-a4eb915e2470","Type":"ContainerDied","Data":"c8d0df3e3bc86d46d44ef7633ba86773c1b75930aa8b3b363f80c7b1015f16b9"} Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.209502 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.255086969 podStartE2EDuration="6.209478226s" podCreationTimestamp="2026-01-21 21:32:33 +0000 UTC" firstStartedPulling="2026-01-21 21:32:34.94566145 +0000 UTC m=+1447.167839920" lastFinishedPulling="2026-01-21 21:32:38.900052707 +0000 UTC m=+1451.122231177" observedRunningTime="2026-01-21 21:32:39.209201628 +0000 UTC m=+1451.431380118" watchObservedRunningTime="2026-01-21 21:32:39.209478226 +0000 UTC m=+1451.431656696" Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.548295 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xlmw8" Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.750087 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50bb1894-3b38-44ab-b3cf-bf2e334673b4-operator-scripts\") pod \"50bb1894-3b38-44ab-b3cf-bf2e334673b4\" (UID: \"50bb1894-3b38-44ab-b3cf-bf2e334673b4\") " Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.750553 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mfx9\" (UniqueName: \"kubernetes.io/projected/50bb1894-3b38-44ab-b3cf-bf2e334673b4-kube-api-access-9mfx9\") pod \"50bb1894-3b38-44ab-b3cf-bf2e334673b4\" (UID: \"50bb1894-3b38-44ab-b3cf-bf2e334673b4\") " Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.751187 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50bb1894-3b38-44ab-b3cf-bf2e334673b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50bb1894-3b38-44ab-b3cf-bf2e334673b4" (UID: "50bb1894-3b38-44ab-b3cf-bf2e334673b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.751837 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50bb1894-3b38-44ab-b3cf-bf2e334673b4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.760273 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50bb1894-3b38-44ab-b3cf-bf2e334673b4-kube-api-access-9mfx9" (OuterVolumeSpecName: "kube-api-access-9mfx9") pod "50bb1894-3b38-44ab-b3cf-bf2e334673b4" (UID: "50bb1894-3b38-44ab-b3cf-bf2e334673b4"). InnerVolumeSpecName "kube-api-access-9mfx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:32:39 crc kubenswrapper[4860]: I0121 21:32:39.853783 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mfx9\" (UniqueName: \"kubernetes.io/projected/50bb1894-3b38-44ab-b3cf-bf2e334673b4-kube-api-access-9mfx9\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:40 crc kubenswrapper[4860]: I0121 21:32:40.289971 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xlmw8" Jan 21 21:32:40 crc kubenswrapper[4860]: I0121 21:32:40.292005 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-xlmw8" event={"ID":"50bb1894-3b38-44ab-b3cf-bf2e334673b4","Type":"ContainerDied","Data":"063c8b7cdda6dfb126b0513baa48c3345928d1d02cd156ad1dbab834198e74e9"} Jan 21 21:32:40 crc kubenswrapper[4860]: I0121 21:32:40.292038 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="063c8b7cdda6dfb126b0513baa48c3345928d1d02cd156ad1dbab834198e74e9" Jan 21 21:32:40 crc kubenswrapper[4860]: I0121 21:32:40.644552 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" Jan 21 21:32:40 crc kubenswrapper[4860]: I0121 21:32:40.684274 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea08dc1-90f2-4d48-844e-a4eb915e2470-operator-scripts\") pod \"fea08dc1-90f2-4d48-844e-a4eb915e2470\" (UID: \"fea08dc1-90f2-4d48-844e-a4eb915e2470\") " Jan 21 21:32:40 crc kubenswrapper[4860]: I0121 21:32:40.684349 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sx87\" (UniqueName: \"kubernetes.io/projected/fea08dc1-90f2-4d48-844e-a4eb915e2470-kube-api-access-9sx87\") pod \"fea08dc1-90f2-4d48-844e-a4eb915e2470\" (UID: \"fea08dc1-90f2-4d48-844e-a4eb915e2470\") " Jan 21 21:32:40 crc kubenswrapper[4860]: I0121 21:32:40.684847 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea08dc1-90f2-4d48-844e-a4eb915e2470-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fea08dc1-90f2-4d48-844e-a4eb915e2470" (UID: "fea08dc1-90f2-4d48-844e-a4eb915e2470"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:32:40 crc kubenswrapper[4860]: I0121 21:32:40.688498 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea08dc1-90f2-4d48-844e-a4eb915e2470-kube-api-access-9sx87" (OuterVolumeSpecName: "kube-api-access-9sx87") pod "fea08dc1-90f2-4d48-844e-a4eb915e2470" (UID: "fea08dc1-90f2-4d48-844e-a4eb915e2470"). InnerVolumeSpecName "kube-api-access-9sx87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:32:40 crc kubenswrapper[4860]: I0121 21:32:40.786493 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sx87\" (UniqueName: \"kubernetes.io/projected/fea08dc1-90f2-4d48-844e-a4eb915e2470-kube-api-access-9sx87\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:40 crc kubenswrapper[4860]: I0121 21:32:40.786550 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea08dc1-90f2-4d48-844e-a4eb915e2470-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.299155 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" event={"ID":"fea08dc1-90f2-4d48-844e-a4eb915e2470","Type":"ContainerDied","Data":"e1348a7fa4085c5615dbb0626a43c5631d8438fdbafb2e0f22e7a496e07ac38f"} Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.299239 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1348a7fa4085c5615dbb0626a43c5631d8438fdbafb2e0f22e7a496e07ac38f" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.299272 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-55db-account-create-update-6xm4r" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.805495 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-57xwt"] Jan 21 21:32:41 crc kubenswrapper[4860]: E0121 21:32:41.806534 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea08dc1-90f2-4d48-844e-a4eb915e2470" containerName="mariadb-account-create-update" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.806556 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea08dc1-90f2-4d48-844e-a4eb915e2470" containerName="mariadb-account-create-update" Jan 21 21:32:41 crc kubenswrapper[4860]: E0121 21:32:41.806607 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50bb1894-3b38-44ab-b3cf-bf2e334673b4" containerName="mariadb-database-create" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.806619 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="50bb1894-3b38-44ab-b3cf-bf2e334673b4" containerName="mariadb-database-create" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.806864 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="50bb1894-3b38-44ab-b3cf-bf2e334673b4" containerName="mariadb-database-create" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.806898 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea08dc1-90f2-4d48-844e-a4eb915e2470" containerName="mariadb-account-create-update" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.809733 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.812992 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.813199 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-k9bkl" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.817422 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-57xwt"] Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.906790 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvwr6\" (UniqueName: \"kubernetes.io/projected/d422ad12-0f54-467d-a449-3bdb5867c028-kube-api-access-xvwr6\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.907497 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.907688 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-config-data\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:41 crc kubenswrapper[4860]: I0121 21:32:41.907831 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-db-sync-config-data\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:42 crc kubenswrapper[4860]: I0121 21:32:42.008541 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-db-sync-config-data\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:42 crc kubenswrapper[4860]: I0121 21:32:42.008671 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwr6\" (UniqueName: \"kubernetes.io/projected/d422ad12-0f54-467d-a449-3bdb5867c028-kube-api-access-xvwr6\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:42 crc kubenswrapper[4860]: I0121 21:32:42.008723 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:42 crc kubenswrapper[4860]: I0121 21:32:42.008768 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-config-data\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:42 crc kubenswrapper[4860]: I0121 
21:32:42.014922 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-db-sync-config-data\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:42 crc kubenswrapper[4860]: I0121 21:32:42.015070 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-config-data\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:42 crc kubenswrapper[4860]: I0121 21:32:42.018755 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:42 crc kubenswrapper[4860]: I0121 21:32:42.030891 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwr6\" (UniqueName: \"kubernetes.io/projected/d422ad12-0f54-467d-a449-3bdb5867c028-kube-api-access-xvwr6\") pod \"watcher-kuttl-db-sync-57xwt\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:42 crc kubenswrapper[4860]: I0121 21:32:42.133533 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:32:42 crc kubenswrapper[4860]: I0121 21:32:42.491588 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 21 21:32:42 crc kubenswrapper[4860]: I0121 21:32:42.969251 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-57xwt"] Jan 21 21:32:43 crc kubenswrapper[4860]: I0121 21:32:43.538122 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" event={"ID":"d422ad12-0f54-467d-a449-3bdb5867c028","Type":"ContainerStarted","Data":"f3b0d828927b020529f6ea5e6b0f71257c8676d12f40e5be08c9ccd420bb0cad"} Jan 21 21:32:48 crc kubenswrapper[4860]: I0121 21:32:48.712797 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/crc-debug-zr7qh"] Jan 21 21:32:48 crc kubenswrapper[4860]: I0121 21:32:48.716586 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/crc-debug-zr7qh" Jan 21 21:32:48 crc kubenswrapper[4860]: I0121 21:32:48.719654 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xhwmg" Jan 21 21:32:48 crc kubenswrapper[4860]: I0121 21:32:48.859605 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a28ae956-41bc-4160-8edc-f40247e5612d-host\") pod \"crc-debug-zr7qh\" (UID: \"a28ae956-41bc-4160-8edc-f40247e5612d\") " pod="openstack-operators/crc-debug-zr7qh" Jan 21 21:32:48 crc kubenswrapper[4860]: I0121 21:32:48.860097 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khdvl\" (UniqueName: \"kubernetes.io/projected/a28ae956-41bc-4160-8edc-f40247e5612d-kube-api-access-khdvl\") pod \"crc-debug-zr7qh\" (UID: \"a28ae956-41bc-4160-8edc-f40247e5612d\") " pod="openstack-operators/crc-debug-zr7qh" Jan 21 21:32:48 crc kubenswrapper[4860]: I0121 21:32:48.961105 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a28ae956-41bc-4160-8edc-f40247e5612d-host\") pod \"crc-debug-zr7qh\" (UID: \"a28ae956-41bc-4160-8edc-f40247e5612d\") " pod="openstack-operators/crc-debug-zr7qh" Jan 21 21:32:48 crc kubenswrapper[4860]: I0121 21:32:48.961237 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khdvl\" (UniqueName: \"kubernetes.io/projected/a28ae956-41bc-4160-8edc-f40247e5612d-kube-api-access-khdvl\") pod \"crc-debug-zr7qh\" (UID: \"a28ae956-41bc-4160-8edc-f40247e5612d\") " pod="openstack-operators/crc-debug-zr7qh" Jan 21 21:32:48 crc kubenswrapper[4860]: I0121 21:32:48.961254 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a28ae956-41bc-4160-8edc-f40247e5612d-host\") pod 
\"crc-debug-zr7qh\" (UID: \"a28ae956-41bc-4160-8edc-f40247e5612d\") " pod="openstack-operators/crc-debug-zr7qh" Jan 21 21:32:48 crc kubenswrapper[4860]: I0121 21:32:48.995849 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khdvl\" (UniqueName: \"kubernetes.io/projected/a28ae956-41bc-4160-8edc-f40247e5612d-kube-api-access-khdvl\") pod \"crc-debug-zr7qh\" (UID: \"a28ae956-41bc-4160-8edc-f40247e5612d\") " pod="openstack-operators/crc-debug-zr7qh" Jan 21 21:32:49 crc kubenswrapper[4860]: I0121 21:32:49.080708 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/crc-debug-zr7qh" Jan 21 21:33:00 crc kubenswrapper[4860]: W0121 21:33:00.563665 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda28ae956_41bc_4160_8edc_f40247e5612d.slice/crio-8584d0ce629c841827cb8034248168b5e4642e600cc1a7d0a2d32bf6e39cc155 WatchSource:0}: Error finding container 8584d0ce629c841827cb8034248168b5e4642e600cc1a7d0a2d32bf6e39cc155: Status 404 returned error can't find the container with id 8584d0ce629c841827cb8034248168b5e4642e600cc1a7d0a2d32bf6e39cc155 Jan 21 21:33:00 crc kubenswrapper[4860]: E0121 21:33:00.580807 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.148:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 21 21:33:00 crc kubenswrapper[4860]: E0121 21:33:00.581436 4860 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.148:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 21 21:33:00 crc kubenswrapper[4860]: E0121 21:33:00.581800 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:watcher-kuttl-db-sync,Image:38.102.83.148:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvwr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-kuttl-db-sync-57xwt_watcher-kuttl-default(d422ad12-0f54-467d-a449-3bdb5867c028): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:33:00 crc kubenswrapper[4860]: E0121 21:33:00.582995 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" podUID="d422ad12-0f54-467d-a449-3bdb5867c028" Jan 21 21:33:00 crc kubenswrapper[4860]: I0121 21:33:00.750792 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/crc-debug-zr7qh" event={"ID":"a28ae956-41bc-4160-8edc-f40247e5612d","Type":"ContainerStarted","Data":"8584d0ce629c841827cb8034248168b5e4642e600cc1a7d0a2d32bf6e39cc155"} Jan 21 21:33:00 crc kubenswrapper[4860]: E0121 21:33:00.752905 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.148:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" podUID="d422ad12-0f54-467d-a449-3bdb5867c028" Jan 21 21:33:04 crc kubenswrapper[4860]: I0121 21:33:04.375050 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:11 crc kubenswrapper[4860]: I0121 21:33:11.884107 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/crc-debug-zr7qh" event={"ID":"a28ae956-41bc-4160-8edc-f40247e5612d","Type":"ContainerStarted","Data":"5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd"} Jan 21 21:33:11 crc kubenswrapper[4860]: 
I0121 21:33:11.919395 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/crc-debug-zr7qh" podStartSLOduration=13.244178032 podStartE2EDuration="23.919358137s" podCreationTimestamp="2026-01-21 21:32:48 +0000 UTC" firstStartedPulling="2026-01-21 21:33:00.568955292 +0000 UTC m=+1472.791133802" lastFinishedPulling="2026-01-21 21:33:11.244135437 +0000 UTC m=+1483.466313907" observedRunningTime="2026-01-21 21:33:11.903544253 +0000 UTC m=+1484.125722733" watchObservedRunningTime="2026-01-21 21:33:11.919358137 +0000 UTC m=+1484.141536637" Jan 21 21:33:14 crc kubenswrapper[4860]: I0121 21:33:14.916071 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" event={"ID":"d422ad12-0f54-467d-a449-3bdb5867c028","Type":"ContainerStarted","Data":"e7f7edb7f4948013fd49a01c78713fafd82849786f78b3e94dca0e23b5b102d9"} Jan 21 21:33:14 crc kubenswrapper[4860]: I0121 21:33:14.946954 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" podStartSLOduration=3.252250437 podStartE2EDuration="33.946915495s" podCreationTimestamp="2026-01-21 21:32:41 +0000 UTC" firstStartedPulling="2026-01-21 21:32:42.967023017 +0000 UTC m=+1455.189201487" lastFinishedPulling="2026-01-21 21:33:13.661688075 +0000 UTC m=+1485.883866545" observedRunningTime="2026-01-21 21:33:14.941893708 +0000 UTC m=+1487.164072178" watchObservedRunningTime="2026-01-21 21:33:14.946915495 +0000 UTC m=+1487.169093965" Jan 21 21:33:17 crc kubenswrapper[4860]: I0121 21:33:17.980544 4860 generic.go:334] "Generic (PLEG): container finished" podID="d422ad12-0f54-467d-a449-3bdb5867c028" containerID="e7f7edb7f4948013fd49a01c78713fafd82849786f78b3e94dca0e23b5b102d9" exitCode=0 Jan 21 21:33:17 crc kubenswrapper[4860]: I0121 21:33:17.980650 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" 
event={"ID":"d422ad12-0f54-467d-a449-3bdb5867c028","Type":"ContainerDied","Data":"e7f7edb7f4948013fd49a01c78713fafd82849786f78b3e94dca0e23b5b102d9"} Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.352296 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.512790 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvwr6\" (UniqueName: \"kubernetes.io/projected/d422ad12-0f54-467d-a449-3bdb5867c028-kube-api-access-xvwr6\") pod \"d422ad12-0f54-467d-a449-3bdb5867c028\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.513372 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-db-sync-config-data\") pod \"d422ad12-0f54-467d-a449-3bdb5867c028\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.513478 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-combined-ca-bundle\") pod \"d422ad12-0f54-467d-a449-3bdb5867c028\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.513509 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-config-data\") pod \"d422ad12-0f54-467d-a449-3bdb5867c028\" (UID: \"d422ad12-0f54-467d-a449-3bdb5867c028\") " Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.521260 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d422ad12-0f54-467d-a449-3bdb5867c028" (UID: "d422ad12-0f54-467d-a449-3bdb5867c028"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.526350 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d422ad12-0f54-467d-a449-3bdb5867c028-kube-api-access-xvwr6" (OuterVolumeSpecName: "kube-api-access-xvwr6") pod "d422ad12-0f54-467d-a449-3bdb5867c028" (UID: "d422ad12-0f54-467d-a449-3bdb5867c028"). InnerVolumeSpecName "kube-api-access-xvwr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.542809 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d422ad12-0f54-467d-a449-3bdb5867c028" (UID: "d422ad12-0f54-467d-a449-3bdb5867c028"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.566584 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-config-data" (OuterVolumeSpecName: "config-data") pod "d422ad12-0f54-467d-a449-3bdb5867c028" (UID: "d422ad12-0f54-467d-a449-3bdb5867c028"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.615582 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvwr6\" (UniqueName: \"kubernetes.io/projected/d422ad12-0f54-467d-a449-3bdb5867c028-kube-api-access-xvwr6\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.615621 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.615634 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:19 crc kubenswrapper[4860]: I0121 21:33:19.615645 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d422ad12-0f54-467d-a449-3bdb5867c028-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.001624 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" event={"ID":"d422ad12-0f54-467d-a449-3bdb5867c028","Type":"ContainerDied","Data":"f3b0d828927b020529f6ea5e6b0f71257c8676d12f40e5be08c9ccd420bb0cad"} Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.001697 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b0d828927b020529f6ea5e6b0f71257c8676d12f40e5be08c9ccd420bb0cad" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.001734 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-57xwt" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.416278 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:33:20 crc kubenswrapper[4860]: E0121 21:33:20.416859 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d422ad12-0f54-467d-a449-3bdb5867c028" containerName="watcher-kuttl-db-sync" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.416876 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d422ad12-0f54-467d-a449-3bdb5867c028" containerName="watcher-kuttl-db-sync" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.417204 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d422ad12-0f54-467d-a449-3bdb5867c028" containerName="watcher-kuttl-db-sync" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.419662 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.424810 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-k9bkl" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.425125 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.460749 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.510873 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.513994 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.517243 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.521398 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.539382 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.539461 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.550627 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99993d99-b364-4ca7-963c-00c9d08d78a0-logs\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.550817 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.550863 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hh6\" (UniqueName: \"kubernetes.io/projected/99993d99-b364-4ca7-963c-00c9d08d78a0-kube-api-access-94hh6\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.550915 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.551196 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.551336 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.551378 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdr9\" (UniqueName: 
\"kubernetes.io/projected/cc78f635-247b-4754-aba4-45f3d84ad917-kube-api-access-sfdr9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.551491 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc78f635-247b-4754-aba4-45f3d84ad917-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.602676 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.605916 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.626049 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.638000 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.656496 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.656575 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.656615 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99993d99-b364-4ca7-963c-00c9d08d78a0-logs\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.656663 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.656694 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.656751 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.656792 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hh6\" (UniqueName: 
\"kubernetes.io/projected/99993d99-b364-4ca7-963c-00c9d08d78a0-kube-api-access-94hh6\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.656822 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.658018 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.658110 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtd8p\" (UniqueName: \"kubernetes.io/projected/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-kube-api-access-gtd8p\") pod \"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.658181 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.658225 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sfdr9\" (UniqueName: \"kubernetes.io/projected/cc78f635-247b-4754-aba4-45f3d84ad917-kube-api-access-sfdr9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.658302 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc78f635-247b-4754-aba4-45f3d84ad917-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.658367 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.660273 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc78f635-247b-4754-aba4-45f3d84ad917-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.662391 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99993d99-b364-4ca7-963c-00c9d08d78a0-logs\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.677861 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.682702 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.682922 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.688845 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.691591 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.692119 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.704067 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfdr9\" (UniqueName: \"kubernetes.io/projected/cc78f635-247b-4754-aba4-45f3d84ad917-kube-api-access-sfdr9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.761298 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtd8p\" (UniqueName: \"kubernetes.io/projected/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-kube-api-access-gtd8p\") pod \"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.761412 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.761454 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.761476 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.762553 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.778070 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.778139 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hh6\" (UniqueName: \"kubernetes.io/projected/99993d99-b364-4ca7-963c-00c9d08d78a0-kube-api-access-94hh6\") pod \"watcher-kuttl-api-0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.789682 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.797397 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtd8p\" (UniqueName: \"kubernetes.io/projected/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-kube-api-access-gtd8p\") pod 
\"watcher-kuttl-applier-0\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.865512 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:20 crc kubenswrapper[4860]: I0121 21:33:20.939988 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:21 crc kubenswrapper[4860]: I0121 21:33:21.050012 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:21 crc kubenswrapper[4860]: I0121 21:33:21.393254 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:33:21 crc kubenswrapper[4860]: I0121 21:33:21.453054 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:33:21 crc kubenswrapper[4860]: I0121 21:33:21.511241 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:33:21 crc kubenswrapper[4860]: W0121 21:33:21.522952 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40c131cf_40a3_4cfb_ac61_669a2ba7d8d6.slice/crio-39a26c6656f534689b35678a9205b41a085f17058b81bc94d30b229fdeee8167 WatchSource:0}: Error finding container 39a26c6656f534689b35678a9205b41a085f17058b81bc94d30b229fdeee8167: Status 404 returned error can't find the container with id 39a26c6656f534689b35678a9205b41a085f17058b81bc94d30b229fdeee8167 Jan 21 21:33:22 crc kubenswrapper[4860]: I0121 21:33:22.027242 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"99993d99-b364-4ca7-963c-00c9d08d78a0","Type":"ContainerStarted","Data":"57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03"} Jan 21 21:33:22 crc kubenswrapper[4860]: I0121 21:33:22.027648 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"99993d99-b364-4ca7-963c-00c9d08d78a0","Type":"ContainerStarted","Data":"c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed"} Jan 21 21:33:22 crc kubenswrapper[4860]: I0121 21:33:22.027662 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"99993d99-b364-4ca7-963c-00c9d08d78a0","Type":"ContainerStarted","Data":"1d34574262566fafbfea1a8d10fc878e16c44e35dcd2bc313fdcc9f03af4d874"} Jan 21 21:33:22 crc kubenswrapper[4860]: I0121 21:33:22.029718 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:22 crc kubenswrapper[4860]: I0121 21:33:22.034162 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.134:9322/\": dial tcp 10.217.0.134:9322: connect: connection refused" Jan 21 21:33:22 crc kubenswrapper[4860]: I0121 21:33:22.049219 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cc78f635-247b-4754-aba4-45f3d84ad917","Type":"ContainerStarted","Data":"42628c5ce2af5be7d227a63d5955ec093e34a2b7e475cedec5920db330a212f2"} Jan 21 21:33:22 crc kubenswrapper[4860]: I0121 21:33:22.050729 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6","Type":"ContainerStarted","Data":"39a26c6656f534689b35678a9205b41a085f17058b81bc94d30b229fdeee8167"} Jan 21 21:33:22 
crc kubenswrapper[4860]: I0121 21:33:22.064979 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.064930728 podStartE2EDuration="2.064930728s" podCreationTimestamp="2026-01-21 21:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:33:22.057606749 +0000 UTC m=+1494.279785219" watchObservedRunningTime="2026-01-21 21:33:22.064930728 +0000 UTC m=+1494.287109218" Jan 21 21:33:24 crc kubenswrapper[4860]: I0121 21:33:24.081687 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6","Type":"ContainerStarted","Data":"d3bf948ee108aa8c7b5e8d12efaf47c0c80da38fed3521cede76968372f9f045"} Jan 21 21:33:24 crc kubenswrapper[4860]: I0121 21:33:24.084194 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cc78f635-247b-4754-aba4-45f3d84ad917","Type":"ContainerStarted","Data":"3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba"} Jan 21 21:33:24 crc kubenswrapper[4860]: I0121 21:33:24.111231 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.748899602 podStartE2EDuration="4.111185712s" podCreationTimestamp="2026-01-21 21:33:20 +0000 UTC" firstStartedPulling="2026-01-21 21:33:21.525553457 +0000 UTC m=+1493.747731927" lastFinishedPulling="2026-01-21 21:33:22.887839567 +0000 UTC m=+1495.110018037" observedRunningTime="2026-01-21 21:33:24.106887888 +0000 UTC m=+1496.329066368" watchObservedRunningTime="2026-01-21 21:33:24.111185712 +0000 UTC m=+1496.333364182" Jan 21 21:33:24 crc kubenswrapper[4860]: I0121 21:33:24.130912 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.653813396 podStartE2EDuration="4.130877778s" podCreationTimestamp="2026-01-21 21:33:20 +0000 UTC" firstStartedPulling="2026-01-21 21:33:21.40866724 +0000 UTC m=+1493.630845710" lastFinishedPulling="2026-01-21 21:33:22.885731612 +0000 UTC m=+1495.107910092" observedRunningTime="2026-01-21 21:33:24.127248164 +0000 UTC m=+1496.349426654" watchObservedRunningTime="2026-01-21 21:33:24.130877778 +0000 UTC m=+1496.353056248" Jan 21 21:33:25 crc kubenswrapper[4860]: I0121 21:33:25.820178 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:25 crc kubenswrapper[4860]: I0121 21:33:25.940887 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:26 crc kubenswrapper[4860]: I0121 21:33:26.051592 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:30 crc kubenswrapper[4860]: I0121 21:33:30.867067 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:30 crc kubenswrapper[4860]: I0121 21:33:30.912429 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:30 crc kubenswrapper[4860]: I0121 21:33:30.941006 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:30 crc kubenswrapper[4860]: I0121 21:33:30.970502 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:31 crc kubenswrapper[4860]: I0121 21:33:31.051060 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:31 crc kubenswrapper[4860]: I0121 21:33:31.062737 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:31 crc kubenswrapper[4860]: I0121 21:33:31.154969 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:31 crc kubenswrapper[4860]: I0121 21:33:31.159829 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:31 crc kubenswrapper[4860]: I0121 21:33:31.184534 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:31 crc kubenswrapper[4860]: I0121 21:33:31.185261 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:32 crc kubenswrapper[4860]: I0121 21:33:32.103989 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:33:32 crc kubenswrapper[4860]: I0121 21:33:32.104108 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.519470 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-57xwt"] Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.533373 4860 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-57xwt"] Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.575386 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher55db-account-delete-xzzwt"] Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.577255 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.590301 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d422ad12-0f54-467d-a449-3bdb5867c028" path="/var/lib/kubelet/pods/d422ad12-0f54-467d-a449-3bdb5867c028/volumes" Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.593058 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.594081 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="cc78f635-247b-4754-aba4-45f3d84ad917" containerName="watcher-decision-engine" containerID="cri-o://3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba" gracePeriod=30 Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.605347 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher55db-account-delete-xzzwt"] Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.606558 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql279\" (UniqueName: \"kubernetes.io/projected/44e420cb-7b6e-459a-bf75-742e533d486b-kube-api-access-ql279\") pod \"watcher55db-account-delete-xzzwt\" (UID: \"44e420cb-7b6e-459a-bf75-742e533d486b\") " pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.606649 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44e420cb-7b6e-459a-bf75-742e533d486b-operator-scripts\") pod \"watcher55db-account-delete-xzzwt\" (UID: \"44e420cb-7b6e-459a-bf75-742e533d486b\") " pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.662083 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.662419 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="40c131cf-40a3-4cfb-ac61-669a2ba7d8d6" containerName="watcher-applier" containerID="cri-o://d3bf948ee108aa8c7b5e8d12efaf47c0c80da38fed3521cede76968372f9f045" gracePeriod=30 Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.708492 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44e420cb-7b6e-459a-bf75-742e533d486b-operator-scripts\") pod \"watcher55db-account-delete-xzzwt\" (UID: \"44e420cb-7b6e-459a-bf75-742e533d486b\") " pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.708719 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql279\" (UniqueName: \"kubernetes.io/projected/44e420cb-7b6e-459a-bf75-742e533d486b-kube-api-access-ql279\") pod \"watcher55db-account-delete-xzzwt\" (UID: \"44e420cb-7b6e-459a-bf75-742e533d486b\") " pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.709576 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44e420cb-7b6e-459a-bf75-742e533d486b-operator-scripts\") pod 
\"watcher55db-account-delete-xzzwt\" (UID: \"44e420cb-7b6e-459a-bf75-742e533d486b\") " pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.717587 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.718071 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerName="watcher-kuttl-api-log" containerID="cri-o://c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed" gracePeriod=30 Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.718081 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerName="watcher-api" containerID="cri-o://57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03" gracePeriod=30 Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.750990 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql279\" (UniqueName: \"kubernetes.io/projected/44e420cb-7b6e-459a-bf75-742e533d486b-kube-api-access-ql279\") pod \"watcher55db-account-delete-xzzwt\" (UID: \"44e420cb-7b6e-459a-bf75-742e533d486b\") " pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" Jan 21 21:33:34 crc kubenswrapper[4860]: I0121 21:33:34.905158 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" Jan 21 21:33:35 crc kubenswrapper[4860]: I0121 21:33:35.221673 4860 generic.go:334] "Generic (PLEG): container finished" podID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerID="c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed" exitCode=143 Jan 21 21:33:35 crc kubenswrapper[4860]: I0121 21:33:35.222006 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"99993d99-b364-4ca7-963c-00c9d08d78a0","Type":"ContainerDied","Data":"c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed"} Jan 21 21:33:35 crc kubenswrapper[4860]: I0121 21:33:35.506160 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher55db-account-delete-xzzwt"] Jan 21 21:33:35 crc kubenswrapper[4860]: I0121 21:33:35.835010 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:33:35 crc kubenswrapper[4860]: I0121 21:33:35.835442 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="ceilometer-central-agent" containerID="cri-o://cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d" gracePeriod=30 Jan 21 21:33:35 crc kubenswrapper[4860]: I0121 21:33:35.835604 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="sg-core" containerID="cri-o://7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00" gracePeriod=30 Jan 21 21:33:35 crc kubenswrapper[4860]: I0121 21:33:35.835708 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="proxy-httpd" 
containerID="cri-o://d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28" gracePeriod=30 Jan 21 21:33:35 crc kubenswrapper[4860]: I0121 21:33:35.835773 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="ceilometer-notification-agent" containerID="cri-o://b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b" gracePeriod=30 Jan 21 21:33:35 crc kubenswrapper[4860]: E0121 21:33:35.971705 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3bf948ee108aa8c7b5e8d12efaf47c0c80da38fed3521cede76968372f9f045" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:33:35 crc kubenswrapper[4860]: E0121 21:33:35.981469 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3bf948ee108aa8c7b5e8d12efaf47c0c80da38fed3521cede76968372f9f045" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:33:35 crc kubenswrapper[4860]: E0121 21:33:35.986212 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3bf948ee108aa8c7b5e8d12efaf47c0c80da38fed3521cede76968372f9f045" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:33:35 crc kubenswrapper[4860]: E0121 21:33:35.986354 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
podUID="40c131cf-40a3-4cfb-ac61-669a2ba7d8d6" containerName="watcher-applier" Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.234795 4860 generic.go:334] "Generic (PLEG): container finished" podID="40c131cf-40a3-4cfb-ac61-669a2ba7d8d6" containerID="d3bf948ee108aa8c7b5e8d12efaf47c0c80da38fed3521cede76968372f9f045" exitCode=0 Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.234897 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6","Type":"ContainerDied","Data":"d3bf948ee108aa8c7b5e8d12efaf47c0c80da38fed3521cede76968372f9f045"} Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.237551 4860 generic.go:334] "Generic (PLEG): container finished" podID="44e420cb-7b6e-459a-bf75-742e533d486b" containerID="a58d16d21c8247aa169e2b1c67f46234d9e2e2bd391821f34370ea0c1cda09e9" exitCode=0 Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.237691 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" event={"ID":"44e420cb-7b6e-459a-bf75-742e533d486b","Type":"ContainerDied","Data":"a58d16d21c8247aa169e2b1c67f46234d9e2e2bd391821f34370ea0c1cda09e9"} Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.237799 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" event={"ID":"44e420cb-7b6e-459a-bf75-742e533d486b","Type":"ContainerStarted","Data":"7a379212c54bcb9fcba2748de6a2cf49d6ffefe1f880dfbefcd2ace7a05a2069"} Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.241859 4860 generic.go:334] "Generic (PLEG): container finished" podID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerID="d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28" exitCode=0 Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.241895 4860 generic.go:334] "Generic (PLEG): container finished" podID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" 
containerID="7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00" exitCode=2 Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.242057 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6c28f496-1ef7-4df4-aed1-96bf3641e4ff","Type":"ContainerDied","Data":"d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28"} Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.242172 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6c28f496-1ef7-4df4-aed1-96bf3641e4ff","Type":"ContainerDied","Data":"7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00"} Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.358626 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.134:9322/\": read tcp 10.217.0.2:41584->10.217.0.134:9322: read: connection reset by peer" Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.358732 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.134:9322/\": read tcp 10.217.0.2:41582->10.217.0.134:9322: read: connection reset by peer" Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.584914 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.801499 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-logs\") pod \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.801643 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtd8p\" (UniqueName: \"kubernetes.io/projected/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-kube-api-access-gtd8p\") pod \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.801748 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-config-data\") pod \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.801918 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-combined-ca-bundle\") pod \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\" (UID: \"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6\") " Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.816004 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-logs" (OuterVolumeSpecName: "logs") pod "40c131cf-40a3-4cfb-ac61-669a2ba7d8d6" (UID: "40c131cf-40a3-4cfb-ac61-669a2ba7d8d6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.830374 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-kube-api-access-gtd8p" (OuterVolumeSpecName: "kube-api-access-gtd8p") pod "40c131cf-40a3-4cfb-ac61-669a2ba7d8d6" (UID: "40c131cf-40a3-4cfb-ac61-669a2ba7d8d6"). InnerVolumeSpecName "kube-api-access-gtd8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.919754 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.919795 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtd8p\" (UniqueName: \"kubernetes.io/projected/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-kube-api-access-gtd8p\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.947065 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40c131cf-40a3-4cfb-ac61-669a2ba7d8d6" (UID: "40c131cf-40a3-4cfb-ac61-669a2ba7d8d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:36 crc kubenswrapper[4860]: I0121 21:33:36.959045 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-config-data" (OuterVolumeSpecName: "config-data") pod "40c131cf-40a3-4cfb-ac61-669a2ba7d8d6" (UID: "40c131cf-40a3-4cfb-ac61-669a2ba7d8d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.022381 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.022432 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.040117 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.123671 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99993d99-b364-4ca7-963c-00c9d08d78a0-logs\") pod \"99993d99-b364-4ca7-963c-00c9d08d78a0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.123739 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-combined-ca-bundle\") pod \"99993d99-b364-4ca7-963c-00c9d08d78a0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.123799 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94hh6\" (UniqueName: \"kubernetes.io/projected/99993d99-b364-4ca7-963c-00c9d08d78a0-kube-api-access-94hh6\") pod \"99993d99-b364-4ca7-963c-00c9d08d78a0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.123826 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-config-data\") pod \"99993d99-b364-4ca7-963c-00c9d08d78a0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.123872 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-custom-prometheus-ca\") pod \"99993d99-b364-4ca7-963c-00c9d08d78a0\" (UID: \"99993d99-b364-4ca7-963c-00c9d08d78a0\") " Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.127165 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99993d99-b364-4ca7-963c-00c9d08d78a0-logs" (OuterVolumeSpecName: "logs") pod "99993d99-b364-4ca7-963c-00c9d08d78a0" (UID: "99993d99-b364-4ca7-963c-00c9d08d78a0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.167262 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99993d99-b364-4ca7-963c-00c9d08d78a0-kube-api-access-94hh6" (OuterVolumeSpecName: "kube-api-access-94hh6") pod "99993d99-b364-4ca7-963c-00c9d08d78a0" (UID: "99993d99-b364-4ca7-963c-00c9d08d78a0"). InnerVolumeSpecName "kube-api-access-94hh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.186496 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "99993d99-b364-4ca7-963c-00c9d08d78a0" (UID: "99993d99-b364-4ca7-963c-00c9d08d78a0"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.188871 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-config-data" (OuterVolumeSpecName: "config-data") pod "99993d99-b364-4ca7-963c-00c9d08d78a0" (UID: "99993d99-b364-4ca7-963c-00c9d08d78a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.191067 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99993d99-b364-4ca7-963c-00c9d08d78a0" (UID: "99993d99-b364-4ca7-963c-00c9d08d78a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.226831 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.226888 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94hh6\" (UniqueName: \"kubernetes.io/projected/99993d99-b364-4ca7-963c-00c9d08d78a0-kube-api-access-94hh6\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.226904 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.226913 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99993d99-b364-4ca7-963c-00c9d08d78a0-custom-prometheus-ca\") on node 
\"crc\" DevicePath \"\"" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.226922 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99993d99-b364-4ca7-963c-00c9d08d78a0-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.258008 4860 generic.go:334] "Generic (PLEG): container finished" podID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerID="cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d" exitCode=0 Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.258091 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6c28f496-1ef7-4df4-aed1-96bf3641e4ff","Type":"ContainerDied","Data":"cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d"} Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.260502 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"40c131cf-40a3-4cfb-ac61-669a2ba7d8d6","Type":"ContainerDied","Data":"39a26c6656f534689b35678a9205b41a085f17058b81bc94d30b229fdeee8167"} Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.260570 4860 scope.go:117] "RemoveContainer" containerID="d3bf948ee108aa8c7b5e8d12efaf47c0c80da38fed3521cede76968372f9f045" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.260723 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.268115 4860 generic.go:334] "Generic (PLEG): container finished" podID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerID="57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03" exitCode=0 Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.268224 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.268198 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"99993d99-b364-4ca7-963c-00c9d08d78a0","Type":"ContainerDied","Data":"57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03"} Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.268307 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"99993d99-b364-4ca7-963c-00c9d08d78a0","Type":"ContainerDied","Data":"1d34574262566fafbfea1a8d10fc878e16c44e35dcd2bc313fdcc9f03af4d874"} Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.313336 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.320394 4860 scope.go:117] "RemoveContainer" containerID="57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.327939 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:33:37 crc kubenswrapper[4860]: E0121 21:33:37.339324 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40c131cf_40a3_4cfb_ac61_669a2ba7d8d6.slice/crio-39a26c6656f534689b35678a9205b41a085f17058b81bc94d30b229fdeee8167\": RecentStats: unable to find data in memory cache]" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.343186 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.360180 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:33:37 crc 
kubenswrapper[4860]: I0121 21:33:37.449507 4860 scope.go:117] "RemoveContainer" containerID="c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.478117 4860 scope.go:117] "RemoveContainer" containerID="57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03" Jan 21 21:33:37 crc kubenswrapper[4860]: E0121 21:33:37.480665 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03\": container with ID starting with 57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03 not found: ID does not exist" containerID="57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.480715 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03"} err="failed to get container status \"57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03\": rpc error: code = NotFound desc = could not find container \"57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03\": container with ID starting with 57f09e989b4d698d3064623a13a597100eeadc39065d09af04d098762cb2de03 not found: ID does not exist" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.480748 4860 scope.go:117] "RemoveContainer" containerID="c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed" Jan 21 21:33:37 crc kubenswrapper[4860]: E0121 21:33:37.481357 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed\": container with ID starting with c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed not found: ID does not exist" 
containerID="c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.481385 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed"} err="failed to get container status \"c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed\": rpc error: code = NotFound desc = could not find container \"c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed\": container with ID starting with c1f905f3cce84c27964b2d9fbe90bfcbb27c154cf0a16420bb9f14bfaa7d91ed not found: ID does not exist" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.667070 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.738459 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql279\" (UniqueName: \"kubernetes.io/projected/44e420cb-7b6e-459a-bf75-742e533d486b-kube-api-access-ql279\") pod \"44e420cb-7b6e-459a-bf75-742e533d486b\" (UID: \"44e420cb-7b6e-459a-bf75-742e533d486b\") " Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.738609 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44e420cb-7b6e-459a-bf75-742e533d486b-operator-scripts\") pod \"44e420cb-7b6e-459a-bf75-742e533d486b\" (UID: \"44e420cb-7b6e-459a-bf75-742e533d486b\") " Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.740342 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44e420cb-7b6e-459a-bf75-742e533d486b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44e420cb-7b6e-459a-bf75-742e533d486b" (UID: "44e420cb-7b6e-459a-bf75-742e533d486b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.751435 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e420cb-7b6e-459a-bf75-742e533d486b-kube-api-access-ql279" (OuterVolumeSpecName: "kube-api-access-ql279") pod "44e420cb-7b6e-459a-bf75-742e533d486b" (UID: "44e420cb-7b6e-459a-bf75-742e533d486b"). InnerVolumeSpecName "kube-api-access-ql279". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.844020 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql279\" (UniqueName: \"kubernetes.io/projected/44e420cb-7b6e-459a-bf75-742e533d486b-kube-api-access-ql279\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.844085 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44e420cb-7b6e-459a-bf75-742e533d486b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:37 crc kubenswrapper[4860]: I0121 21:33:37.980951 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.049610 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-custom-prometheus-ca\") pod \"cc78f635-247b-4754-aba4-45f3d84ad917\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.050539 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-config-data\") pod \"cc78f635-247b-4754-aba4-45f3d84ad917\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.050613 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-combined-ca-bundle\") pod \"cc78f635-247b-4754-aba4-45f3d84ad917\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.050698 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfdr9\" (UniqueName: \"kubernetes.io/projected/cc78f635-247b-4754-aba4-45f3d84ad917-kube-api-access-sfdr9\") pod \"cc78f635-247b-4754-aba4-45f3d84ad917\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.050836 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc78f635-247b-4754-aba4-45f3d84ad917-logs\") pod \"cc78f635-247b-4754-aba4-45f3d84ad917\" (UID: \"cc78f635-247b-4754-aba4-45f3d84ad917\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.053368 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cc78f635-247b-4754-aba4-45f3d84ad917-logs" (OuterVolumeSpecName: "logs") pod "cc78f635-247b-4754-aba4-45f3d84ad917" (UID: "cc78f635-247b-4754-aba4-45f3d84ad917"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.056836 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc78f635-247b-4754-aba4-45f3d84ad917-kube-api-access-sfdr9" (OuterVolumeSpecName: "kube-api-access-sfdr9") pod "cc78f635-247b-4754-aba4-45f3d84ad917" (UID: "cc78f635-247b-4754-aba4-45f3d84ad917"). InnerVolumeSpecName "kube-api-access-sfdr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.063364 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.074419 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cc78f635-247b-4754-aba4-45f3d84ad917" (UID: "cc78f635-247b-4754-aba4-45f3d84ad917"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.097024 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc78f635-247b-4754-aba4-45f3d84ad917" (UID: "cc78f635-247b-4754-aba4-45f3d84ad917"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.136202 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-config-data" (OuterVolumeSpecName: "config-data") pod "cc78f635-247b-4754-aba4-45f3d84ad917" (UID: "cc78f635-247b-4754-aba4-45f3d84ad917"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.154014 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-scripts\") pod \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.154129 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wgp2\" (UniqueName: \"kubernetes.io/projected/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-kube-api-access-5wgp2\") pod \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.154209 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-log-httpd\") pod \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.154250 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-run-httpd\") pod \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.154351 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-sg-core-conf-yaml\") pod \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.154591 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-ceilometer-tls-certs\") pod \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.154633 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-combined-ca-bundle\") pod \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.154662 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-config-data\") pod \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\" (UID: \"6c28f496-1ef7-4df4-aed1-96bf3641e4ff\") " Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.155064 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc78f635-247b-4754-aba4-45f3d84ad917-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.155078 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.155092 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.155103 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc78f635-247b-4754-aba4-45f3d84ad917-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.155113 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfdr9\" (UniqueName: \"kubernetes.io/projected/cc78f635-247b-4754-aba4-45f3d84ad917-kube-api-access-sfdr9\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.159607 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-kube-api-access-5wgp2" (OuterVolumeSpecName: "kube-api-access-5wgp2") pod "6c28f496-1ef7-4df4-aed1-96bf3641e4ff" (UID: "6c28f496-1ef7-4df4-aed1-96bf3641e4ff"). InnerVolumeSpecName "kube-api-access-5wgp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.163372 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6c28f496-1ef7-4df4-aed1-96bf3641e4ff" (UID: "6c28f496-1ef7-4df4-aed1-96bf3641e4ff"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.163437 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6c28f496-1ef7-4df4-aed1-96bf3641e4ff" (UID: "6c28f496-1ef7-4df4-aed1-96bf3641e4ff"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.170712 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-scripts" (OuterVolumeSpecName: "scripts") pod "6c28f496-1ef7-4df4-aed1-96bf3641e4ff" (UID: "6c28f496-1ef7-4df4-aed1-96bf3641e4ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.185653 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6c28f496-1ef7-4df4-aed1-96bf3641e4ff" (UID: "6c28f496-1ef7-4df4-aed1-96bf3641e4ff"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.238651 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6c28f496-1ef7-4df4-aed1-96bf3641e4ff" (UID: "6c28f496-1ef7-4df4-aed1-96bf3641e4ff"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.258073 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.258140 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wgp2\" (UniqueName: \"kubernetes.io/projected/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-kube-api-access-5wgp2\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.258153 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.258165 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.258173 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.258182 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.261885 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c28f496-1ef7-4df4-aed1-96bf3641e4ff" (UID: 
"6c28f496-1ef7-4df4-aed1-96bf3641e4ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.283535 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" event={"ID":"44e420cb-7b6e-459a-bf75-742e533d486b","Type":"ContainerDied","Data":"7a379212c54bcb9fcba2748de6a2cf49d6ffefe1f880dfbefcd2ace7a05a2069"} Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.283597 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a379212c54bcb9fcba2748de6a2cf49d6ffefe1f880dfbefcd2ace7a05a2069" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.283678 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher55db-account-delete-xzzwt" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.290903 4860 generic.go:334] "Generic (PLEG): container finished" podID="cc78f635-247b-4754-aba4-45f3d84ad917" containerID="3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba" exitCode=0 Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.291042 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cc78f635-247b-4754-aba4-45f3d84ad917","Type":"ContainerDied","Data":"3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba"} Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.291079 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cc78f635-247b-4754-aba4-45f3d84ad917","Type":"ContainerDied","Data":"42628c5ce2af5be7d227a63d5955ec093e34a2b7e475cedec5920db330a212f2"} Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.291119 4860 scope.go:117] "RemoveContainer" containerID="3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba" Jan 21 21:33:38 crc 
kubenswrapper[4860]: I0121 21:33:38.291274 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.307105 4860 generic.go:334] "Generic (PLEG): container finished" podID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerID="b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b" exitCode=0 Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.307221 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-config-data" (OuterVolumeSpecName: "config-data") pod "6c28f496-1ef7-4df4-aed1-96bf3641e4ff" (UID: "6c28f496-1ef7-4df4-aed1-96bf3641e4ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.307306 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.307327 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6c28f496-1ef7-4df4-aed1-96bf3641e4ff","Type":"ContainerDied","Data":"b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b"} Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.307520 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6c28f496-1ef7-4df4-aed1-96bf3641e4ff","Type":"ContainerDied","Data":"a362986d7394c0f5fd2ab9342dff171d173379f6295bf0ef845ae03cc0c36e32"} Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.350685 4860 scope.go:117] "RemoveContainer" containerID="3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.351376 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba\": container with ID starting with 3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba not found: ID does not exist" containerID="3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.351434 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba"} err="failed to get container status \"3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba\": rpc error: code = NotFound desc = could not find container \"3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba\": container with ID starting with 3ebf87c25bdc2f5e3b01dcbf33f959eaa497375217efeb2f74e8185267422bba not found: ID does not exist" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.351467 4860 scope.go:117] "RemoveContainer" containerID="d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.359722 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.359753 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28f496-1ef7-4df4-aed1-96bf3641e4ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.366281 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.375118 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:33:38 crc 
kubenswrapper[4860]: I0121 21:33:38.380675 4860 scope.go:117] "RemoveContainer" containerID="7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.396803 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.401694 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.422525 4860 scope.go:117] "RemoveContainer" containerID="b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.427133 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.427622 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="ceilometer-central-agent" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.427645 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="ceilometer-central-agent" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.427663 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="proxy-httpd" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.427669 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="proxy-httpd" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.427682 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc78f635-247b-4754-aba4-45f3d84ad917" containerName="watcher-decision-engine" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.427714 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc78f635-247b-4754-aba4-45f3d84ad917" 
containerName="watcher-decision-engine" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.427796 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="ceilometer-notification-agent" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.427808 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="ceilometer-notification-agent" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.427819 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e420cb-7b6e-459a-bf75-742e533d486b" containerName="mariadb-account-delete" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.427844 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e420cb-7b6e-459a-bf75-742e533d486b" containerName="mariadb-account-delete" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.427857 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerName="watcher-kuttl-api-log" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.427865 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerName="watcher-kuttl-api-log" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.427880 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="sg-core" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.427889 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="sg-core" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.427897 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c131cf-40a3-4cfb-ac61-669a2ba7d8d6" containerName="watcher-applier" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.427904 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="40c131cf-40a3-4cfb-ac61-669a2ba7d8d6" containerName="watcher-applier" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.427916 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerName="watcher-api" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.427925 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerName="watcher-api" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.428236 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c131cf-40a3-4cfb-ac61-669a2ba7d8d6" containerName="watcher-applier" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.428257 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerName="watcher-kuttl-api-log" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.428269 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="ceilometer-central-agent" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.428277 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="sg-core" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.428289 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="ceilometer-notification-agent" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.428300 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e420cb-7b6e-459a-bf75-742e533d486b" containerName="mariadb-account-delete" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.428318 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" containerName="proxy-httpd" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.428328 4860 
memory_manager.go:354] "RemoveStaleState removing state" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" containerName="watcher-api" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.428342 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc78f635-247b-4754-aba4-45f3d84ad917" containerName="watcher-decision-engine" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.431064 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.440592 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.440896 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.441507 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.453367 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.460823 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-config-data\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.460928 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-scripts\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc 
kubenswrapper[4860]: I0121 21:33:38.460986 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.461040 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-log-httpd\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.461082 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.461106 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.461134 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-run-httpd\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.461158 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92n4n\" (UniqueName: \"kubernetes.io/projected/b97e96e7-eb2f-4155-86c3-0b00603728b3-kube-api-access-92n4n\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.475417 4860 scope.go:117] "RemoveContainer" containerID="cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.511494 4860 scope.go:117] "RemoveContainer" containerID="d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.512441 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28\": container with ID starting with d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28 not found: ID does not exist" containerID="d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.512524 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28"} err="failed to get container status \"d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28\": rpc error: code = NotFound desc = could not find container \"d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28\": container with ID starting with d7e5dcc495bc19b4b913ef4b6afab75f46d74769848a29983beeb31f0e5bbc28 not found: ID does not exist" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.512570 4860 scope.go:117] "RemoveContainer" containerID="7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.513269 4860 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00\": container with ID starting with 7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00 not found: ID does not exist" containerID="7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.513294 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00"} err="failed to get container status \"7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00\": rpc error: code = NotFound desc = could not find container \"7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00\": container with ID starting with 7465f41829d12489b45e9a2cc3fdcda213f3935d5a7193975d846feda62f5c00 not found: ID does not exist" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.513308 4860 scope.go:117] "RemoveContainer" containerID="b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.513599 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b\": container with ID starting with b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b not found: ID does not exist" containerID="b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.513644 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b"} err="failed to get container status \"b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b\": rpc error: code = NotFound 
desc = could not find container \"b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b\": container with ID starting with b19335382b900616fb8fe14d3714906fd7d0783f9bc6ec35cfcbafaeee59fc3b not found: ID does not exist" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.513659 4860 scope.go:117] "RemoveContainer" containerID="cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d" Jan 21 21:33:38 crc kubenswrapper[4860]: E0121 21:33:38.514307 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d\": container with ID starting with cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d not found: ID does not exist" containerID="cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.514331 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d"} err="failed to get container status \"cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d\": rpc error: code = NotFound desc = could not find container \"cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d\": container with ID starting with cd3569b530aa952de4411101526fe29afac2a50f1e7f65032d950d4a541bda5d not found: ID does not exist" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.563401 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-log-httpd\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.563549 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.563579 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.563613 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-run-httpd\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.563696 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92n4n\" (UniqueName: \"kubernetes.io/projected/b97e96e7-eb2f-4155-86c3-0b00603728b3-kube-api-access-92n4n\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.563733 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-config-data\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.563813 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-scripts\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.563848 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.564424 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-log-httpd\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.564507 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-run-httpd\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.569187 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.569269 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-scripts\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.571880 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.572348 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.579796 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-config-data\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.586243 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92n4n\" (UniqueName: \"kubernetes.io/projected/b97e96e7-eb2f-4155-86c3-0b00603728b3-kube-api-access-92n4n\") pod \"ceilometer-0\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.604221 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40c131cf-40a3-4cfb-ac61-669a2ba7d8d6" path="/var/lib/kubelet/pods/40c131cf-40a3-4cfb-ac61-669a2ba7d8d6/volumes" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.604914 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c28f496-1ef7-4df4-aed1-96bf3641e4ff" path="/var/lib/kubelet/pods/6c28f496-1ef7-4df4-aed1-96bf3641e4ff/volumes" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.606216 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99993d99-b364-4ca7-963c-00c9d08d78a0" 
path="/var/lib/kubelet/pods/99993d99-b364-4ca7-963c-00c9d08d78a0/volumes" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.610616 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc78f635-247b-4754-aba4-45f3d84ad917" path="/var/lib/kubelet/pods/cc78f635-247b-4754-aba4-45f3d84ad917/volumes" Jan 21 21:33:38 crc kubenswrapper[4860]: I0121 21:33:38.769206 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:39 crc kubenswrapper[4860]: I0121 21:33:39.143318 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:33:39 crc kubenswrapper[4860]: I0121 21:33:39.325497 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b97e96e7-eb2f-4155-86c3-0b00603728b3","Type":"ContainerStarted","Data":"5e9786cb8ede24db74543f975c5ffaa8c2295df11c8e97c737e5e3f74de9ba9d"} Jan 21 21:33:39 crc kubenswrapper[4860]: I0121 21:33:39.628923 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xlmw8"] Jan 21 21:33:39 crc kubenswrapper[4860]: I0121 21:33:39.638617 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xlmw8"] Jan 21 21:33:39 crc kubenswrapper[4860]: I0121 21:33:39.647403 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-55db-account-create-update-6xm4r"] Jan 21 21:33:39 crc kubenswrapper[4860]: I0121 21:33:39.659180 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher55db-account-delete-xzzwt"] Jan 21 21:33:39 crc kubenswrapper[4860]: I0121 21:33:39.676667 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-55db-account-create-update-6xm4r"] Jan 21 21:33:39 crc kubenswrapper[4860]: I0121 21:33:39.690286 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher55db-account-delete-xzzwt"] Jan 21 21:33:40 crc kubenswrapper[4860]: I0121 21:33:40.339884 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b97e96e7-eb2f-4155-86c3-0b00603728b3","Type":"ContainerStarted","Data":"b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a"} Jan 21 21:33:40 crc kubenswrapper[4860]: I0121 21:33:40.596545 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e420cb-7b6e-459a-bf75-742e533d486b" path="/var/lib/kubelet/pods/44e420cb-7b6e-459a-bf75-742e533d486b/volumes" Jan 21 21:33:40 crc kubenswrapper[4860]: I0121 21:33:40.597880 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50bb1894-3b38-44ab-b3cf-bf2e334673b4" path="/var/lib/kubelet/pods/50bb1894-3b38-44ab-b3cf-bf2e334673b4/volumes" Jan 21 21:33:40 crc kubenswrapper[4860]: I0121 21:33:40.598507 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea08dc1-90f2-4d48-844e-a4eb915e2470" path="/var/lib/kubelet/pods/fea08dc1-90f2-4d48-844e-a4eb915e2470/volumes" Jan 21 21:33:41 crc kubenswrapper[4860]: I0121 21:33:41.350671 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b97e96e7-eb2f-4155-86c3-0b00603728b3","Type":"ContainerStarted","Data":"7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b"} Jan 21 21:33:42 crc kubenswrapper[4860]: I0121 21:33:42.362793 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b97e96e7-eb2f-4155-86c3-0b00603728b3","Type":"ContainerStarted","Data":"b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed"} Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.040093 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-5459-account-create-update-sfqxg"] Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.041516 4860 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.050472 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.060169 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-rx9wf"] Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.061537 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/358ec7aa-66f1-47a5-ae23-de77490acea4-operator-scripts\") pod \"watcher-5459-account-create-update-sfqxg\" (UID: \"358ec7aa-66f1-47a5-ae23-de77490acea4\") " pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.061611 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69q6g\" (UniqueName: \"kubernetes.io/projected/358ec7aa-66f1-47a5-ae23-de77490acea4-kube-api-access-69q6g\") pod \"watcher-5459-account-create-update-sfqxg\" (UID: \"358ec7aa-66f1-47a5-ae23-de77490acea4\") " pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.062090 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-rx9wf" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.090459 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-rx9wf"] Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.103047 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-5459-account-create-update-sfqxg"] Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.164424 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-operator-scripts\") pod \"watcher-db-create-rx9wf\" (UID: \"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67\") " pod="watcher-kuttl-default/watcher-db-create-rx9wf" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.164539 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5dql\" (UniqueName: \"kubernetes.io/projected/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-kube-api-access-q5dql\") pod \"watcher-db-create-rx9wf\" (UID: \"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67\") " pod="watcher-kuttl-default/watcher-db-create-rx9wf" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.164629 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/358ec7aa-66f1-47a5-ae23-de77490acea4-operator-scripts\") pod \"watcher-5459-account-create-update-sfqxg\" (UID: \"358ec7aa-66f1-47a5-ae23-de77490acea4\") " pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.164665 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69q6g\" (UniqueName: \"kubernetes.io/projected/358ec7aa-66f1-47a5-ae23-de77490acea4-kube-api-access-69q6g\") pod 
\"watcher-5459-account-create-update-sfqxg\" (UID: \"358ec7aa-66f1-47a5-ae23-de77490acea4\") " pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.166694 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/358ec7aa-66f1-47a5-ae23-de77490acea4-operator-scripts\") pod \"watcher-5459-account-create-update-sfqxg\" (UID: \"358ec7aa-66f1-47a5-ae23-de77490acea4\") " pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.192742 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69q6g\" (UniqueName: \"kubernetes.io/projected/358ec7aa-66f1-47a5-ae23-de77490acea4-kube-api-access-69q6g\") pod \"watcher-5459-account-create-update-sfqxg\" (UID: \"358ec7aa-66f1-47a5-ae23-de77490acea4\") " pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.267227 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-operator-scripts\") pod \"watcher-db-create-rx9wf\" (UID: \"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67\") " pod="watcher-kuttl-default/watcher-db-create-rx9wf" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.267341 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5dql\" (UniqueName: \"kubernetes.io/projected/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-kube-api-access-q5dql\") pod \"watcher-db-create-rx9wf\" (UID: \"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67\") " pod="watcher-kuttl-default/watcher-db-create-rx9wf" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.268642 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-operator-scripts\") pod \"watcher-db-create-rx9wf\" (UID: \"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67\") " pod="watcher-kuttl-default/watcher-db-create-rx9wf" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.302947 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5dql\" (UniqueName: \"kubernetes.io/projected/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-kube-api-access-q5dql\") pod \"watcher-db-create-rx9wf\" (UID: \"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67\") " pod="watcher-kuttl-default/watcher-db-create-rx9wf" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.379145 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" Jan 21 21:33:43 crc kubenswrapper[4860]: I0121 21:33:43.420489 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-rx9wf" Jan 21 21:33:44 crc kubenswrapper[4860]: I0121 21:33:44.002211 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-5459-account-create-update-sfqxg"] Jan 21 21:33:44 crc kubenswrapper[4860]: I0121 21:33:44.106227 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-rx9wf"] Jan 21 21:33:44 crc kubenswrapper[4860]: W0121 21:33:44.115813 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2c99a2e_dff6_49dd_8fdc_c69654d8fa67.slice/crio-22525062b7c6ad3d95d940239c65131d6ee4c9862b25ac984adb25353173de8b WatchSource:0}: Error finding container 22525062b7c6ad3d95d940239c65131d6ee4c9862b25ac984adb25353173de8b: Status 404 returned error can't find the container with id 22525062b7c6ad3d95d940239c65131d6ee4c9862b25ac984adb25353173de8b Jan 21 21:33:44 crc kubenswrapper[4860]: I0121 21:33:44.385219 4860 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b97e96e7-eb2f-4155-86c3-0b00603728b3","Type":"ContainerStarted","Data":"429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8"} Jan 21 21:33:44 crc kubenswrapper[4860]: I0121 21:33:44.385695 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:33:44 crc kubenswrapper[4860]: I0121 21:33:44.388017 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" event={"ID":"358ec7aa-66f1-47a5-ae23-de77490acea4","Type":"ContainerStarted","Data":"358db26bed2e6a3e77a7308da8d7aa133241c2480b5a3e0bffbcb04012546a22"} Jan 21 21:33:44 crc kubenswrapper[4860]: I0121 21:33:44.388047 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" event={"ID":"358ec7aa-66f1-47a5-ae23-de77490acea4","Type":"ContainerStarted","Data":"8a196d8c2ccf1a2114e9764f677cfcedd58bd903093aba2eae745f58fb32c915"} Jan 21 21:33:44 crc kubenswrapper[4860]: I0121 21:33:44.389565 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-rx9wf" event={"ID":"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67","Type":"ContainerStarted","Data":"c081d5c9262a9ac1caf9fd9368718efc6ac592af7f3d8b29611ed2510b8ad0db"} Jan 21 21:33:44 crc kubenswrapper[4860]: I0121 21:33:44.389593 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-rx9wf" event={"ID":"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67","Type":"ContainerStarted","Data":"22525062b7c6ad3d95d940239c65131d6ee4c9862b25ac984adb25353173de8b"} Jan 21 21:33:44 crc kubenswrapper[4860]: I0121 21:33:44.431144 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.481649407 podStartE2EDuration="6.431101831s" podCreationTimestamp="2026-01-21 
21:33:38 +0000 UTC" firstStartedPulling="2026-01-21 21:33:39.138718492 +0000 UTC m=+1511.360896962" lastFinishedPulling="2026-01-21 21:33:43.088170916 +0000 UTC m=+1515.310349386" observedRunningTime="2026-01-21 21:33:44.418182466 +0000 UTC m=+1516.640360936" watchObservedRunningTime="2026-01-21 21:33:44.431101831 +0000 UTC m=+1516.653280311" Jan 21 21:33:44 crc kubenswrapper[4860]: I0121 21:33:44.478427 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" podStartSLOduration=1.4784098399999999 podStartE2EDuration="1.47840984s" podCreationTimestamp="2026-01-21 21:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:33:44.475497669 +0000 UTC m=+1516.697676139" watchObservedRunningTime="2026-01-21 21:33:44.47840984 +0000 UTC m=+1516.700588310" Jan 21 21:33:44 crc kubenswrapper[4860]: I0121 21:33:44.482181 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-rx9wf" podStartSLOduration=1.482168158 podStartE2EDuration="1.482168158s" podCreationTimestamp="2026-01-21 21:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:33:44.459673505 +0000 UTC m=+1516.681851975" watchObservedRunningTime="2026-01-21 21:33:44.482168158 +0000 UTC m=+1516.704346628" Jan 21 21:33:45 crc kubenswrapper[4860]: I0121 21:33:45.404130 4860 generic.go:334] "Generic (PLEG): container finished" podID="e2c99a2e-dff6-49dd-8fdc-c69654d8fa67" containerID="c081d5c9262a9ac1caf9fd9368718efc6ac592af7f3d8b29611ed2510b8ad0db" exitCode=0 Jan 21 21:33:45 crc kubenswrapper[4860]: I0121 21:33:45.404218 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-rx9wf" 
event={"ID":"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67","Type":"ContainerDied","Data":"c081d5c9262a9ac1caf9fd9368718efc6ac592af7f3d8b29611ed2510b8ad0db"} Jan 21 21:33:45 crc kubenswrapper[4860]: I0121 21:33:45.410124 4860 generic.go:334] "Generic (PLEG): container finished" podID="358ec7aa-66f1-47a5-ae23-de77490acea4" containerID="358db26bed2e6a3e77a7308da8d7aa133241c2480b5a3e0bffbcb04012546a22" exitCode=0 Jan 21 21:33:45 crc kubenswrapper[4860]: I0121 21:33:45.410210 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" event={"ID":"358ec7aa-66f1-47a5-ae23-de77490acea4","Type":"ContainerDied","Data":"358db26bed2e6a3e77a7308da8d7aa133241c2480b5a3e0bffbcb04012546a22"} Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.029998 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-rx9wf" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.036077 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.140920 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/358ec7aa-66f1-47a5-ae23-de77490acea4-operator-scripts\") pod \"358ec7aa-66f1-47a5-ae23-de77490acea4\" (UID: \"358ec7aa-66f1-47a5-ae23-de77490acea4\") " Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.141103 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69q6g\" (UniqueName: \"kubernetes.io/projected/358ec7aa-66f1-47a5-ae23-de77490acea4-kube-api-access-69q6g\") pod \"358ec7aa-66f1-47a5-ae23-de77490acea4\" (UID: \"358ec7aa-66f1-47a5-ae23-de77490acea4\") " Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.141287 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5dql\" (UniqueName: \"kubernetes.io/projected/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-kube-api-access-q5dql\") pod \"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67\" (UID: \"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67\") " Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.141521 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-operator-scripts\") pod \"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67\" (UID: \"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67\") " Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.141705 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358ec7aa-66f1-47a5-ae23-de77490acea4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "358ec7aa-66f1-47a5-ae23-de77490acea4" (UID: "358ec7aa-66f1-47a5-ae23-de77490acea4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.142012 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/358ec7aa-66f1-47a5-ae23-de77490acea4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.142460 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2c99a2e-dff6-49dd-8fdc-c69654d8fa67" (UID: "e2c99a2e-dff6-49dd-8fdc-c69654d8fa67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.151627 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-kube-api-access-q5dql" (OuterVolumeSpecName: "kube-api-access-q5dql") pod "e2c99a2e-dff6-49dd-8fdc-c69654d8fa67" (UID: "e2c99a2e-dff6-49dd-8fdc-c69654d8fa67"). InnerVolumeSpecName "kube-api-access-q5dql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.156983 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/358ec7aa-66f1-47a5-ae23-de77490acea4-kube-api-access-69q6g" (OuterVolumeSpecName: "kube-api-access-69q6g") pod "358ec7aa-66f1-47a5-ae23-de77490acea4" (UID: "358ec7aa-66f1-47a5-ae23-de77490acea4"). InnerVolumeSpecName "kube-api-access-69q6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.244888 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5dql\" (UniqueName: \"kubernetes.io/projected/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-kube-api-access-q5dql\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.244969 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.244985 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69q6g\" (UniqueName: \"kubernetes.io/projected/358ec7aa-66f1-47a5-ae23-de77490acea4-kube-api-access-69q6g\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.446235 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" event={"ID":"358ec7aa-66f1-47a5-ae23-de77490acea4","Type":"ContainerDied","Data":"8a196d8c2ccf1a2114e9764f677cfcedd58bd903093aba2eae745f58fb32c915"} Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.446386 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a196d8c2ccf1a2114e9764f677cfcedd58bd903093aba2eae745f58fb32c915" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.446274 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5459-account-create-update-sfqxg" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.449678 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-rx9wf" event={"ID":"e2c99a2e-dff6-49dd-8fdc-c69654d8fa67","Type":"ContainerDied","Data":"22525062b7c6ad3d95d940239c65131d6ee4c9862b25ac984adb25353173de8b"} Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.449733 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22525062b7c6ad3d95d940239c65131d6ee4c9862b25ac984adb25353173de8b" Jan 21 21:33:47 crc kubenswrapper[4860]: I0121 21:33:47.449826 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-rx9wf" Jan 21 21:33:47 crc kubenswrapper[4860]: E0121 21:33:47.633791 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod358ec7aa_66f1_47a5_ae23_de77490acea4.slice\": RecentStats: unable to find data in memory cache]" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.392765 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2ww86"] Jan 21 21:33:53 crc kubenswrapper[4860]: E0121 21:33:53.393956 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358ec7aa-66f1-47a5-ae23-de77490acea4" containerName="mariadb-account-create-update" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.393972 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="358ec7aa-66f1-47a5-ae23-de77490acea4" containerName="mariadb-account-create-update" Jan 21 21:33:53 crc kubenswrapper[4860]: E0121 21:33:53.394001 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c99a2e-dff6-49dd-8fdc-c69654d8fa67" containerName="mariadb-database-create" Jan 21 21:33:53 crc 
kubenswrapper[4860]: I0121 21:33:53.394007 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c99a2e-dff6-49dd-8fdc-c69654d8fa67" containerName="mariadb-database-create" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.394192 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c99a2e-dff6-49dd-8fdc-c69654d8fa67" containerName="mariadb-database-create" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.394204 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="358ec7aa-66f1-47a5-ae23-de77490acea4" containerName="mariadb-account-create-update" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.394880 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.398587 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.398888 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-jr6s5" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.411725 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2ww86"] Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.586704 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.586842 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-db-sync-config-data\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.586897 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8dv8\" (UniqueName: \"kubernetes.io/projected/a468d56c-b296-4927-b2dd-ea4d951ec5bd-kube-api-access-h8dv8\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.586946 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-config-data\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.689173 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-config-data\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.689411 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.689631 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-db-sync-config-data\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.689718 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8dv8\" (UniqueName: \"kubernetes.io/projected/a468d56c-b296-4927-b2dd-ea4d951ec5bd-kube-api-access-h8dv8\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.699855 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-config-data\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.701022 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.709639 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-db-sync-config-data\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:53 crc kubenswrapper[4860]: I0121 21:33:53.739752 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h8dv8\" (UniqueName: \"kubernetes.io/projected/a468d56c-b296-4927-b2dd-ea4d951ec5bd-kube-api-access-h8dv8\") pod \"watcher-kuttl-db-sync-2ww86\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:54 crc kubenswrapper[4860]: I0121 21:33:54.024337 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:54 crc kubenswrapper[4860]: I0121 21:33:54.548982 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2ww86"] Jan 21 21:33:55 crc kubenswrapper[4860]: I0121 21:33:55.530711 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" event={"ID":"a468d56c-b296-4927-b2dd-ea4d951ec5bd","Type":"ContainerStarted","Data":"dc5c685ee1d3d36d41163c540ec271c882f501aa00b5c3db55708809fead6568"} Jan 21 21:33:55 crc kubenswrapper[4860]: I0121 21:33:55.533413 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" event={"ID":"a468d56c-b296-4927-b2dd-ea4d951ec5bd","Type":"ContainerStarted","Data":"3456472e85d7cdffde2b329c9ef6067928ad2e5b2c0b3c4af8f916ced370a80d"} Jan 21 21:33:55 crc kubenswrapper[4860]: I0121 21:33:55.572614 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" podStartSLOduration=2.572586502 podStartE2EDuration="2.572586502s" podCreationTimestamp="2026-01-21 21:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:33:55.566987427 +0000 UTC m=+1527.789165927" watchObservedRunningTime="2026-01-21 21:33:55.572586502 +0000 UTC m=+1527.794764972" Jan 21 21:33:57 crc kubenswrapper[4860]: I0121 21:33:57.555194 4860 generic.go:334] "Generic (PLEG): container finished" 
podID="a468d56c-b296-4927-b2dd-ea4d951ec5bd" containerID="dc5c685ee1d3d36d41163c540ec271c882f501aa00b5c3db55708809fead6568" exitCode=0 Jan 21 21:33:57 crc kubenswrapper[4860]: I0121 21:33:57.555306 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" event={"ID":"a468d56c-b296-4927-b2dd-ea4d951ec5bd","Type":"ContainerDied","Data":"dc5c685ee1d3d36d41163c540ec271c882f501aa00b5c3db55708809fead6568"} Jan 21 21:33:58 crc kubenswrapper[4860]: I0121 21:33:58.967051 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.100062 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-config-data\") pod \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.100278 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-db-sync-config-data\") pod \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.100340 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8dv8\" (UniqueName: \"kubernetes.io/projected/a468d56c-b296-4927-b2dd-ea4d951ec5bd-kube-api-access-h8dv8\") pod \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.100412 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-combined-ca-bundle\") 
pod \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\" (UID: \"a468d56c-b296-4927-b2dd-ea4d951ec5bd\") " Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.113291 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a468d56c-b296-4927-b2dd-ea4d951ec5bd" (UID: "a468d56c-b296-4927-b2dd-ea4d951ec5bd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.118313 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a468d56c-b296-4927-b2dd-ea4d951ec5bd-kube-api-access-h8dv8" (OuterVolumeSpecName: "kube-api-access-h8dv8") pod "a468d56c-b296-4927-b2dd-ea4d951ec5bd" (UID: "a468d56c-b296-4927-b2dd-ea4d951ec5bd"). InnerVolumeSpecName "kube-api-access-h8dv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.137895 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a468d56c-b296-4927-b2dd-ea4d951ec5bd" (UID: "a468d56c-b296-4927-b2dd-ea4d951ec5bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.150209 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-config-data" (OuterVolumeSpecName: "config-data") pod "a468d56c-b296-4927-b2dd-ea4d951ec5bd" (UID: "a468d56c-b296-4927-b2dd-ea4d951ec5bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.202525 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.202587 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.202604 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8dv8\" (UniqueName: \"kubernetes.io/projected/a468d56c-b296-4927-b2dd-ea4d951ec5bd-kube-api-access-h8dv8\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.202618 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a468d56c-b296-4927-b2dd-ea4d951ec5bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.595182 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" event={"ID":"a468d56c-b296-4927-b2dd-ea4d951ec5bd","Type":"ContainerDied","Data":"3456472e85d7cdffde2b329c9ef6067928ad2e5b2c0b3c4af8f916ced370a80d"} Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.595242 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3456472e85d7cdffde2b329c9ef6067928ad2e5b2c0b3c4af8f916ced370a80d" Jan 21 21:33:59 crc kubenswrapper[4860]: I0121 21:33:59.595348 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2ww86" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.096984 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:34:00 crc kubenswrapper[4860]: E0121 21:34:00.097515 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a468d56c-b296-4927-b2dd-ea4d951ec5bd" containerName="watcher-kuttl-db-sync" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.097535 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a468d56c-b296-4927-b2dd-ea4d951ec5bd" containerName="watcher-kuttl-db-sync" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.097745 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a468d56c-b296-4927-b2dd-ea4d951ec5bd" containerName="watcher-kuttl-db-sync" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.099007 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.102795 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.107492 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-jr6s5" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.119224 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.127282 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.129272 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.152237 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.152689 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.153006 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.172379 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.218626 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.220611 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.225825 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.226549 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.227628 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.227661 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.227683 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.227711 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: 
\"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.227795 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1938d0be-9281-4d7a-b9b1-84c5844428bd-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.227819 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pscq\" (UniqueName: \"kubernetes.io/projected/1938d0be-9281-4d7a-b9b1-84c5844428bd-kube-api-access-6pscq\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.227847 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.227869 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.227895 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxhnk\" (UniqueName: 
\"kubernetes.io/projected/0ce29365-727f-46f1-ba00-6cf789b1cf1f-kube-api-access-lxhnk\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.227922 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ce29365-727f-46f1-ba00-6cf789b1cf1f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.227979 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.228001 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.329917 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1938d0be-9281-4d7a-b9b1-84c5844428bd-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330130 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6pscq\" (UniqueName: \"kubernetes.io/projected/1938d0be-9281-4d7a-b9b1-84c5844428bd-kube-api-access-6pscq\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330179 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330207 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330234 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxhnk\" (UniqueName: \"kubernetes.io/projected/0ce29365-727f-46f1-ba00-6cf789b1cf1f-kube-api-access-lxhnk\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330261 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4pz\" (UniqueName: \"kubernetes.io/projected/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-kube-api-access-wk4pz\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330287 4860 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ce29365-727f-46f1-ba00-6cf789b1cf1f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330307 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330350 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330377 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330397 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330416 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330464 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330509 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330533 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330573 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.330856 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1938d0be-9281-4d7a-b9b1-84c5844428bd-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.333448 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ce29365-727f-46f1-ba00-6cf789b1cf1f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.340234 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.340836 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.341804 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.345201 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-combined-ca-bundle\") 
pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.346973 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.347095 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.349877 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.352582 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.356050 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxhnk\" (UniqueName: \"kubernetes.io/projected/0ce29365-727f-46f1-ba00-6cf789b1cf1f-kube-api-access-lxhnk\") pod \"watcher-kuttl-api-0\" (UID: 
\"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.356240 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pscq\" (UniqueName: \"kubernetes.io/projected/1938d0be-9281-4d7a-b9b1-84c5844428bd-kube-api-access-6pscq\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.418333 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.432125 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.432194 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.432438 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.432566 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wk4pz\" (UniqueName: \"kubernetes.io/projected/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-kube-api-access-wk4pz\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.433093 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.439876 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.445849 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.458096 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk4pz\" (UniqueName: \"kubernetes.io/projected/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-kube-api-access-wk4pz\") pod \"watcher-kuttl-applier-0\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.466353 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:00 crc kubenswrapper[4860]: I0121 21:34:00.542726 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.044722 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.059104 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.218893 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:34:01 crc kubenswrapper[4860]: W0121 21:34:01.221428 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d2c1f1_9d86_4e72_8728_38bf5bc3c674.slice/crio-09ce82f46da9a6b0d4e2fe35451efcd326a12bb96c308a177a43e6a7cc13bc91 WatchSource:0}: Error finding container 09ce82f46da9a6b0d4e2fe35451efcd326a12bb96c308a177a43e6a7cc13bc91: Status 404 returned error can't find the container with id 09ce82f46da9a6b0d4e2fe35451efcd326a12bb96c308a177a43e6a7cc13bc91 Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.627474 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"24d2c1f1-9d86-4e72-8728-38bf5bc3c674","Type":"ContainerStarted","Data":"63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170"} Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.627977 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"24d2c1f1-9d86-4e72-8728-38bf5bc3c674","Type":"ContainerStarted","Data":"09ce82f46da9a6b0d4e2fe35451efcd326a12bb96c308a177a43e6a7cc13bc91"} Jan 21 21:34:01 crc 
kubenswrapper[4860]: I0121 21:34:01.629956 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1938d0be-9281-4d7a-b9b1-84c5844428bd","Type":"ContainerStarted","Data":"456c7ff79ee0fcbdc9fb3137a793fcfcab58d8493603b8831a3a1815d5607319"} Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.630010 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1938d0be-9281-4d7a-b9b1-84c5844428bd","Type":"ContainerStarted","Data":"fae2c36bb3cde59d2a3e33b56b6d88a520d2892b10b3cd776aade60fa1f51e33"} Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.632776 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"0ce29365-727f-46f1-ba00-6cf789b1cf1f","Type":"ContainerStarted","Data":"6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe"} Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.632809 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"0ce29365-727f-46f1-ba00-6cf789b1cf1f","Type":"ContainerStarted","Data":"2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3"} Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.632823 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"0ce29365-727f-46f1-ba00-6cf789b1cf1f","Type":"ContainerStarted","Data":"ab837de8b12245ce7f5163c7ce1826578fc002f81a5f2158886ff02f679068ed"} Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.633053 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.635152 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" 
containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.143:9322/\": dial tcp 10.217.0.143:9322: connect: connection refused" Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.654006 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.65396755 podStartE2EDuration="1.65396755s" podCreationTimestamp="2026-01-21 21:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:34:01.648490488 +0000 UTC m=+1533.870668968" watchObservedRunningTime="2026-01-21 21:34:01.65396755 +0000 UTC m=+1533.876146030" Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.675833 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.6758136829999999 podStartE2EDuration="1.675813683s" podCreationTimestamp="2026-01-21 21:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:34:01.672073075 +0000 UTC m=+1533.894251545" watchObservedRunningTime="2026-01-21 21:34:01.675813683 +0000 UTC m=+1533.897992153" Jan 21 21:34:01 crc kubenswrapper[4860]: I0121 21:34:01.698016 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.697996267 podStartE2EDuration="1.697996267s" podCreationTimestamp="2026-01-21 21:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:34:01.691622807 +0000 UTC m=+1533.913801277" watchObservedRunningTime="2026-01-21 21:34:01.697996267 +0000 UTC m=+1533.920174737" Jan 21 21:34:02 crc kubenswrapper[4860]: I0121 21:34:02.103803 4860 patch_prober.go:28] interesting 
pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:34:02 crc kubenswrapper[4860]: I0121 21:34:02.103881 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:34:05 crc kubenswrapper[4860]: I0121 21:34:05.232531 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:05 crc kubenswrapper[4860]: I0121 21:34:05.468139 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:05 crc kubenswrapper[4860]: I0121 21:34:05.543834 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:08 crc kubenswrapper[4860]: I0121 21:34:08.779972 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:10 crc kubenswrapper[4860]: I0121 21:34:10.423441 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:10 crc kubenswrapper[4860]: I0121 21:34:10.467514 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:10 crc kubenswrapper[4860]: I0121 21:34:10.481008 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:10 crc kubenswrapper[4860]: 
I0121 21:34:10.486106 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:10 crc kubenswrapper[4860]: I0121 21:34:10.543787 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:10 crc kubenswrapper[4860]: I0121 21:34:10.570458 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:10 crc kubenswrapper[4860]: I0121 21:34:10.736359 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:10 crc kubenswrapper[4860]: I0121 21:34:10.747383 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:10 crc kubenswrapper[4860]: I0121 21:34:10.773289 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:10 crc kubenswrapper[4860]: I0121 21:34:10.779401 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:13 crc kubenswrapper[4860]: I0121 21:34:13.059986 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:34:13 crc kubenswrapper[4860]: I0121 21:34:13.060637 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="ceilometer-central-agent" containerID="cri-o://b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a" gracePeriod=30 Jan 21 21:34:13 crc kubenswrapper[4860]: I0121 21:34:13.060786 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="sg-core" containerID="cri-o://b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed" gracePeriod=30 Jan 21 21:34:13 crc kubenswrapper[4860]: I0121 21:34:13.060887 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="proxy-httpd" containerID="cri-o://429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8" gracePeriod=30 Jan 21 21:34:13 crc kubenswrapper[4860]: I0121 21:34:13.060786 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="ceilometer-notification-agent" containerID="cri-o://7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b" gracePeriod=30 Jan 21 21:34:13 crc kubenswrapper[4860]: I0121 21:34:13.774788 4860 generic.go:334] "Generic (PLEG): container finished" podID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerID="429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8" exitCode=0 Jan 21 21:34:13 crc kubenswrapper[4860]: I0121 21:34:13.775361 4860 generic.go:334] "Generic (PLEG): container finished" podID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerID="b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed" exitCode=2 Jan 21 21:34:13 crc kubenswrapper[4860]: I0121 21:34:13.775377 4860 generic.go:334] "Generic (PLEG): container finished" podID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerID="b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a" exitCode=0 Jan 21 21:34:13 crc kubenswrapper[4860]: I0121 21:34:13.774876 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b97e96e7-eb2f-4155-86c3-0b00603728b3","Type":"ContainerDied","Data":"429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8"} Jan 21 21:34:13 crc 
kubenswrapper[4860]: I0121 21:34:13.775461 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b97e96e7-eb2f-4155-86c3-0b00603728b3","Type":"ContainerDied","Data":"b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed"} Jan 21 21:34:13 crc kubenswrapper[4860]: I0121 21:34:13.775487 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b97e96e7-eb2f-4155-86c3-0b00603728b3","Type":"ContainerDied","Data":"b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a"} Jan 21 21:34:14 crc kubenswrapper[4860]: I0121 21:34:14.235474 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:14 crc kubenswrapper[4860]: I0121 21:34:14.235928 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerName="watcher-api" containerID="cri-o://6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe" gracePeriod=30 Jan 21 21:34:14 crc kubenswrapper[4860]: I0121 21:34:14.236254 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerName="watcher-kuttl-api-log" containerID="cri-o://2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3" gracePeriod=30 Jan 21 21:34:14 crc kubenswrapper[4860]: I0121 21:34:14.789306 4860 generic.go:334] "Generic (PLEG): container finished" podID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerID="2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3" exitCode=143 Jan 21 21:34:14 crc kubenswrapper[4860]: I0121 21:34:14.789362 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"0ce29365-727f-46f1-ba00-6cf789b1cf1f","Type":"ContainerDied","Data":"2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3"} Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.596658 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.699913 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-public-tls-certs\") pod \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.700129 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-combined-ca-bundle\") pod \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.700261 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-internal-tls-certs\") pod \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.700347 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-custom-prometheus-ca\") pod \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.700392 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-config-data\") pod \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.700421 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ce29365-727f-46f1-ba00-6cf789b1cf1f-logs\") pod \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.700444 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxhnk\" (UniqueName: \"kubernetes.io/projected/0ce29365-727f-46f1-ba00-6cf789b1cf1f-kube-api-access-lxhnk\") pod \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\" (UID: \"0ce29365-727f-46f1-ba00-6cf789b1cf1f\") " Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.701909 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ce29365-727f-46f1-ba00-6cf789b1cf1f-logs" (OuterVolumeSpecName: "logs") pod "0ce29365-727f-46f1-ba00-6cf789b1cf1f" (UID: "0ce29365-727f-46f1-ba00-6cf789b1cf1f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.709026 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ce29365-727f-46f1-ba00-6cf789b1cf1f-kube-api-access-lxhnk" (OuterVolumeSpecName: "kube-api-access-lxhnk") pod "0ce29365-727f-46f1-ba00-6cf789b1cf1f" (UID: "0ce29365-727f-46f1-ba00-6cf789b1cf1f"). InnerVolumeSpecName "kube-api-access-lxhnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.745450 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "0ce29365-727f-46f1-ba00-6cf789b1cf1f" (UID: "0ce29365-727f-46f1-ba00-6cf789b1cf1f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.747459 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ce29365-727f-46f1-ba00-6cf789b1cf1f" (UID: "0ce29365-727f-46f1-ba00-6cf789b1cf1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.754619 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0ce29365-727f-46f1-ba00-6cf789b1cf1f" (UID: "0ce29365-727f-46f1-ba00-6cf789b1cf1f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.759443 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-config-data" (OuterVolumeSpecName: "config-data") pod "0ce29365-727f-46f1-ba00-6cf789b1cf1f" (UID: "0ce29365-727f-46f1-ba00-6cf789b1cf1f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.768994 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0ce29365-727f-46f1-ba00-6cf789b1cf1f" (UID: "0ce29365-727f-46f1-ba00-6cf789b1cf1f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.804374 4860 generic.go:334] "Generic (PLEG): container finished" podID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerID="6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe" exitCode=0 Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.804454 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"0ce29365-727f-46f1-ba00-6cf789b1cf1f","Type":"ContainerDied","Data":"6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe"} Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.804499 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"0ce29365-727f-46f1-ba00-6cf789b1cf1f","Type":"ContainerDied","Data":"ab837de8b12245ce7f5163c7ce1826578fc002f81a5f2158886ff02f679068ed"} Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.804521 4860 scope.go:117] "RemoveContainer" containerID="6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.804673 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.804780 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.804872 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ce29365-727f-46f1-ba00-6cf789b1cf1f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.804970 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxhnk\" (UniqueName: \"kubernetes.io/projected/0ce29365-727f-46f1-ba00-6cf789b1cf1f-kube-api-access-lxhnk\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.805073 4860 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.805137 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.805193 4860 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce29365-727f-46f1-ba00-6cf789b1cf1f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.804714 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.853022 4860 scope.go:117] "RemoveContainer" containerID="2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.872377 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.891196 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.895768 4860 scope.go:117] "RemoveContainer" containerID="6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe" Jan 21 21:34:15 crc kubenswrapper[4860]: E0121 21:34:15.896604 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe\": container with ID starting with 6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe not found: ID does not exist" containerID="6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.896762 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe"} err="failed to get container status \"6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe\": rpc error: code = NotFound desc = could not find container \"6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe\": container with ID starting with 6bef0ada97085fcb77994febf9bb6cdde9b0245842851cf1f6c546fc2d8142fe not found: ID does not exist" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.896890 4860 scope.go:117] "RemoveContainer" 
containerID="2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3" Jan 21 21:34:15 crc kubenswrapper[4860]: E0121 21:34:15.898878 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3\": container with ID starting with 2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3 not found: ID does not exist" containerID="2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.898960 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3"} err="failed to get container status \"2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3\": rpc error: code = NotFound desc = could not find container \"2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3\": container with ID starting with 2ab6a194aab5ce958aa760ef9920277f05e9231fb419f48e7dcc3c4b31ec5de3 not found: ID does not exist" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.907824 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:15 crc kubenswrapper[4860]: E0121 21:34:15.908423 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerName="watcher-kuttl-api-log" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.908444 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerName="watcher-kuttl-api-log" Jan 21 21:34:15 crc kubenswrapper[4860]: E0121 21:34:15.908466 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerName="watcher-api" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.908475 4860 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerName="watcher-api" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.908658 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerName="watcher-kuttl-api-log" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.908674 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerName="watcher-api" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.909906 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.918584 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.922375 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.923002 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 21 21:34:15 crc kubenswrapper[4860]: I0121 21:34:15.923321 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.008097 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.008176 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.008220 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.008302 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9f76\" (UniqueName: \"kubernetes.io/projected/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-kube-api-access-g9f76\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.008356 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.008501 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.008593 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.110570 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9f76\" (UniqueName: \"kubernetes.io/projected/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-kube-api-access-g9f76\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.110661 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.110727 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.110760 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.110798 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.110843 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.110882 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.112013 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.116729 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.120697 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-config-data\") pod \"watcher-kuttl-api-0\" (UID: 
\"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.122268 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.122421 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.123413 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.135728 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9f76\" (UniqueName: \"kubernetes.io/projected/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-kube-api-access-g9f76\") pod \"watcher-kuttl-api-0\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.233801 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.593642 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" path="/var/lib/kubelet/pods/0ce29365-727f-46f1-ba00-6cf789b1cf1f/volumes" Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.775827 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:16 crc kubenswrapper[4860]: I0121 21:34:16.887086 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c294d7c9-7e39-4f16-b9fd-7e72c3c51232","Type":"ContainerStarted","Data":"c46568cb4d056f6db8448b6df3baa576b4da3ed3ce1c6bc01e458f8195349dd1"} Jan 21 21:34:17 crc kubenswrapper[4860]: I0121 21:34:17.929843 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c294d7c9-7e39-4f16-b9fd-7e72c3c51232","Type":"ContainerStarted","Data":"7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750"} Jan 21 21:34:17 crc kubenswrapper[4860]: I0121 21:34:17.930646 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:17 crc kubenswrapper[4860]: I0121 21:34:17.930710 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c294d7c9-7e39-4f16-b9fd-7e72c3c51232","Type":"ContainerStarted","Data":"68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e"} Jan 21 21:34:17 crc kubenswrapper[4860]: I0121 21:34:17.962662 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.962626811 podStartE2EDuration="2.962626811s" podCreationTimestamp="2026-01-21 21:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:34:17.953830957 +0000 UTC m=+1550.176009427" watchObservedRunningTime="2026-01-21 21:34:17.962626811 +0000 UTC m=+1550.184805281" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.521682 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.673564 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-ceilometer-tls-certs\") pod \"b97e96e7-eb2f-4155-86c3-0b00603728b3\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.673674 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-combined-ca-bundle\") pod \"b97e96e7-eb2f-4155-86c3-0b00603728b3\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.673767 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-scripts\") pod \"b97e96e7-eb2f-4155-86c3-0b00603728b3\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.673811 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-run-httpd\") pod \"b97e96e7-eb2f-4155-86c3-0b00603728b3\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.673862 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-sg-core-conf-yaml\") pod \"b97e96e7-eb2f-4155-86c3-0b00603728b3\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.673919 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92n4n\" (UniqueName: \"kubernetes.io/projected/b97e96e7-eb2f-4155-86c3-0b00603728b3-kube-api-access-92n4n\") pod \"b97e96e7-eb2f-4155-86c3-0b00603728b3\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.673969 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-log-httpd\") pod \"b97e96e7-eb2f-4155-86c3-0b00603728b3\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.674072 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-config-data\") pod \"b97e96e7-eb2f-4155-86c3-0b00603728b3\" (UID: \"b97e96e7-eb2f-4155-86c3-0b00603728b3\") " Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.684476 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-scripts" (OuterVolumeSpecName: "scripts") pod "b97e96e7-eb2f-4155-86c3-0b00603728b3" (UID: "b97e96e7-eb2f-4155-86c3-0b00603728b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.685585 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b97e96e7-eb2f-4155-86c3-0b00603728b3" (UID: "b97e96e7-eb2f-4155-86c3-0b00603728b3"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.686086 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b97e96e7-eb2f-4155-86c3-0b00603728b3" (UID: "b97e96e7-eb2f-4155-86c3-0b00603728b3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.695048 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b97e96e7-eb2f-4155-86c3-0b00603728b3-kube-api-access-92n4n" (OuterVolumeSpecName: "kube-api-access-92n4n") pod "b97e96e7-eb2f-4155-86c3-0b00603728b3" (UID: "b97e96e7-eb2f-4155-86c3-0b00603728b3"). InnerVolumeSpecName "kube-api-access-92n4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.716135 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.722188 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b97e96e7-eb2f-4155-86c3-0b00603728b3" (UID: "b97e96e7-eb2f-4155-86c3-0b00603728b3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.766785 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b97e96e7-eb2f-4155-86c3-0b00603728b3" (UID: "b97e96e7-eb2f-4155-86c3-0b00603728b3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.777808 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.777864 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.777878 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.777890 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.777903 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92n4n\" (UniqueName: \"kubernetes.io/projected/b97e96e7-eb2f-4155-86c3-0b00603728b3-kube-api-access-92n4n\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.777917 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b97e96e7-eb2f-4155-86c3-0b00603728b3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.791034 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b97e96e7-eb2f-4155-86c3-0b00603728b3" (UID: 
"b97e96e7-eb2f-4155-86c3-0b00603728b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.818180 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-config-data" (OuterVolumeSpecName: "config-data") pod "b97e96e7-eb2f-4155-86c3-0b00603728b3" (UID: "b97e96e7-eb2f-4155-86c3-0b00603728b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.879589 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.879640 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97e96e7-eb2f-4155-86c3-0b00603728b3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.943444 4860 generic.go:334] "Generic (PLEG): container finished" podID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerID="7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b" exitCode=0 Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.944673 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.948008 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b97e96e7-eb2f-4155-86c3-0b00603728b3","Type":"ContainerDied","Data":"7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b"} Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.948052 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b97e96e7-eb2f-4155-86c3-0b00603728b3","Type":"ContainerDied","Data":"5e9786cb8ede24db74543f975c5ffaa8c2295df11c8e97c737e5e3f74de9ba9d"} Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.948072 4860 scope.go:117] "RemoveContainer" containerID="429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.978788 4860 scope.go:117] "RemoveContainer" containerID="b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed" Jan 21 21:34:18 crc kubenswrapper[4860]: I0121 21:34:18.995258 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.022042 4860 scope.go:117] "RemoveContainer" containerID="7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.025683 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.054430 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:34:19 crc kubenswrapper[4860]: E0121 21:34:19.055067 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="sg-core" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.055093 4860 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="sg-core" Jan 21 21:34:19 crc kubenswrapper[4860]: E0121 21:34:19.055108 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="proxy-httpd" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.055116 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="proxy-httpd" Jan 21 21:34:19 crc kubenswrapper[4860]: E0121 21:34:19.055132 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="ceilometer-central-agent" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.055142 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="ceilometer-central-agent" Jan 21 21:34:19 crc kubenswrapper[4860]: E0121 21:34:19.055160 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="ceilometer-notification-agent" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.055170 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="ceilometer-notification-agent" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.055394 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="sg-core" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.055411 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="ceilometer-central-agent" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.055427 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="proxy-httpd" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.055440 4860 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" containerName="ceilometer-notification-agent" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.060439 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.065004 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.066522 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.066725 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.067584 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.077675 4860 scope.go:117] "RemoveContainer" containerID="b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.103728 4860 scope.go:117] "RemoveContainer" containerID="429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8" Jan 21 21:34:19 crc kubenswrapper[4860]: E0121 21:34:19.104281 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8\": container with ID starting with 429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8 not found: ID does not exist" containerID="429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.104317 4860 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8"} err="failed to get container status \"429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8\": rpc error: code = NotFound desc = could not find container \"429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8\": container with ID starting with 429cdc7df918cf01a84468954112d4b790eae21bf2c0bfab80cbda08be7faeb8 not found: ID does not exist" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.104352 4860 scope.go:117] "RemoveContainer" containerID="b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed" Jan 21 21:34:19 crc kubenswrapper[4860]: E0121 21:34:19.104708 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed\": container with ID starting with b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed not found: ID does not exist" containerID="b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.104733 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed"} err="failed to get container status \"b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed\": rpc error: code = NotFound desc = could not find container \"b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed\": container with ID starting with b605ec0b59be8c6dadd8dc8950a38624224b66f08b710efc2383d933979391ed not found: ID does not exist" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.105168 4860 scope.go:117] "RemoveContainer" containerID="7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b" Jan 21 21:34:19 crc kubenswrapper[4860]: E0121 21:34:19.106149 4860 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b\": container with ID starting with 7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b not found: ID does not exist" containerID="7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.106224 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b"} err="failed to get container status \"7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b\": rpc error: code = NotFound desc = could not find container \"7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b\": container with ID starting with 7415fa41731fcdbbee5bbc0160d579783792b99876d330ffdaad870799f3a73b not found: ID does not exist" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.106274 4860 scope.go:117] "RemoveContainer" containerID="b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a" Jan 21 21:34:19 crc kubenswrapper[4860]: E0121 21:34:19.106728 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a\": container with ID starting with b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a not found: ID does not exist" containerID="b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.106761 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a"} err="failed to get container status \"b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a\": rpc error: code = NotFound desc = could not find container 
\"b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a\": container with ID starting with b5950297113891f7a9f04daf7571f86daff91b3d588c4ff712809c4f1aa1d70a not found: ID does not exist" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.190816 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-config-data\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.191000 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-scripts\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.191039 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-log-httpd\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.191110 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-run-httpd\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.191132 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.191196 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.191234 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.191264 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224z8\" (UniqueName: \"kubernetes.io/projected/231b0585-8337-4c03-a70c-0075aa6bad1c-kube-api-access-224z8\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.293109 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-run-httpd\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.293818 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.294695 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.293728 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-run-httpd\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.294786 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.294837 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-224z8\" (UniqueName: \"kubernetes.io/projected/231b0585-8337-4c03-a70c-0075aa6bad1c-kube-api-access-224z8\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.295500 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-config-data\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.295890 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-scripts\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.296313 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-log-httpd\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.297403 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-log-httpd\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.299183 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.299865 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.300708 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-config-data\") pod \"ceilometer-0\" (UID: 
\"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.300854 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.302357 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-scripts\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.320387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-224z8\" (UniqueName: \"kubernetes.io/projected/231b0585-8337-4c03-a70c-0075aa6bad1c-kube-api-access-224z8\") pod \"ceilometer-0\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.379290 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.941133 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.964866 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.965101 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerName="watcher-kuttl-api-log" containerID="cri-o://68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e" gracePeriod=30 Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.965227 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"231b0585-8337-4c03-a70c-0075aa6bad1c","Type":"ContainerStarted","Data":"2de69738ef0c99909024d6b4b9daf11f2f23c0d846f68dda02f11fffd3e562fd"} Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.965849 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerName="watcher-api" containerID="cri-o://7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750" gracePeriod=30 Jan 21 21:34:19 crc kubenswrapper[4860]: I0121 21:34:19.993506 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.145:9322/\": EOF" Jan 21 21:34:20 crc kubenswrapper[4860]: I0121 21:34:20.469561 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerName="watcher-kuttl-api-log" probeResult="failure" 
output="Get \"https://10.217.0.143:9322/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 21:34:20 crc kubenswrapper[4860]: I0121 21:34:20.470085 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="0ce29365-727f-46f1-ba00-6cf789b1cf1f" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.143:9322/\": context deadline exceeded" Jan 21 21:34:20 crc kubenswrapper[4860]: I0121 21:34:20.590552 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b97e96e7-eb2f-4155-86c3-0b00603728b3" path="/var/lib/kubelet/pods/b97e96e7-eb2f-4155-86c3-0b00603728b3/volumes" Jan 21 21:34:20 crc kubenswrapper[4860]: I0121 21:34:20.976908 4860 generic.go:334] "Generic (PLEG): container finished" podID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerID="68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e" exitCode=143 Jan 21 21:34:20 crc kubenswrapper[4860]: I0121 21:34:20.978207 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c294d7c9-7e39-4f16-b9fd-7e72c3c51232","Type":"ContainerDied","Data":"68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e"} Jan 21 21:34:20 crc kubenswrapper[4860]: I0121 21:34:20.980442 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"231b0585-8337-4c03-a70c-0075aa6bad1c","Type":"ContainerStarted","Data":"9e702bcaa4f43e4d0ac90695c3804438a727cbcb2cfbaee99ae5b629e50c0b33"} Jan 21 21:34:21 crc kubenswrapper[4860]: I0121 21:34:21.234351 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:21 crc kubenswrapper[4860]: I0121 21:34:21.994593 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"231b0585-8337-4c03-a70c-0075aa6bad1c","Type":"ContainerStarted","Data":"dd9cb29ea1387b25b7a1c2f700bb921ab8d714be57bb511902aa6f5a8f0f1cdd"} Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.152611 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.145:9322/\": read tcp 10.217.0.2:39076->10.217.0.145:9322: read: connection reset by peer" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.153716 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.145:9322/\": dial tcp 10.217.0.145:9322: connect: connection refused" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.664081 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.813028 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9f76\" (UniqueName: \"kubernetes.io/projected/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-kube-api-access-g9f76\") pod \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.813522 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-public-tls-certs\") pod \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.813591 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-internal-tls-certs\") pod \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.813632 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-config-data\") pod \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.813687 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-combined-ca-bundle\") pod \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.813747 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-custom-prometheus-ca\") pod \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.813788 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-logs\") pod \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\" (UID: \"c294d7c9-7e39-4f16-b9fd-7e72c3c51232\") " Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.814697 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-logs" (OuterVolumeSpecName: "logs") pod "c294d7c9-7e39-4f16-b9fd-7e72c3c51232" (UID: "c294d7c9-7e39-4f16-b9fd-7e72c3c51232"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.823167 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-kube-api-access-g9f76" (OuterVolumeSpecName: "kube-api-access-g9f76") pod "c294d7c9-7e39-4f16-b9fd-7e72c3c51232" (UID: "c294d7c9-7e39-4f16-b9fd-7e72c3c51232"). InnerVolumeSpecName "kube-api-access-g9f76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.873995 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c294d7c9-7e39-4f16-b9fd-7e72c3c51232" (UID: "c294d7c9-7e39-4f16-b9fd-7e72c3c51232"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.874850 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c294d7c9-7e39-4f16-b9fd-7e72c3c51232" (UID: "c294d7c9-7e39-4f16-b9fd-7e72c3c51232"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.891495 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c294d7c9-7e39-4f16-b9fd-7e72c3c51232" (UID: "c294d7c9-7e39-4f16-b9fd-7e72c3c51232"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.894864 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c294d7c9-7e39-4f16-b9fd-7e72c3c51232" (UID: "c294d7c9-7e39-4f16-b9fd-7e72c3c51232"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.916787 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.916846 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.916858 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.916900 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9f76\" (UniqueName: \"kubernetes.io/projected/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-kube-api-access-g9f76\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.916922 4860 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.916963 4860 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:22 crc kubenswrapper[4860]: I0121 21:34:22.920260 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-config-data" (OuterVolumeSpecName: "config-data") pod "c294d7c9-7e39-4f16-b9fd-7e72c3c51232" (UID: 
"c294d7c9-7e39-4f16-b9fd-7e72c3c51232"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.011559 4860 generic.go:334] "Generic (PLEG): container finished" podID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerID="7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750" exitCode=0 Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.011694 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c294d7c9-7e39-4f16-b9fd-7e72c3c51232","Type":"ContainerDied","Data":"7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750"} Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.011811 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c294d7c9-7e39-4f16-b9fd-7e72c3c51232","Type":"ContainerDied","Data":"c46568cb4d056f6db8448b6df3baa576b4da3ed3ce1c6bc01e458f8195349dd1"} Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.011844 4860 scope.go:117] "RemoveContainer" containerID="7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.011726 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.019183 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c294d7c9-7e39-4f16-b9fd-7e72c3c51232-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.019467 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"231b0585-8337-4c03-a70c-0075aa6bad1c","Type":"ContainerStarted","Data":"9a18a4aebb1445a51984b09345d2fa4cb529749251e322304ac2eb0f16a3bc59"} Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.042021 4860 scope.go:117] "RemoveContainer" containerID="68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.065208 4860 scope.go:117] "RemoveContainer" containerID="7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750" Jan 21 21:34:23 crc kubenswrapper[4860]: E0121 21:34:23.066241 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750\": container with ID starting with 7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750 not found: ID does not exist" containerID="7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.066315 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750"} err="failed to get container status \"7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750\": rpc error: code = NotFound desc = could not find container \"7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750\": container with ID starting with 
7ae4e7c9b9d2d142377478f9d3edee9b4b8b6a6d88549962c1b4f93e791a5750 not found: ID does not exist" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.066359 4860 scope.go:117] "RemoveContainer" containerID="68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e" Jan 21 21:34:23 crc kubenswrapper[4860]: E0121 21:34:23.066841 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e\": container with ID starting with 68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e not found: ID does not exist" containerID="68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.066926 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e"} err="failed to get container status \"68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e\": rpc error: code = NotFound desc = could not find container \"68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e\": container with ID starting with 68da3404316319446652db0ab476e2889fc60b7d96a13e7b11a516ac90be543e not found: ID does not exist" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.068591 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.082750 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.094426 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:23 crc kubenswrapper[4860]: E0121 21:34:23.094959 4860 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerName="watcher-api" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.094987 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerName="watcher-api" Jan 21 21:34:23 crc kubenswrapper[4860]: E0121 21:34:23.095034 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerName="watcher-kuttl-api-log" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.095044 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerName="watcher-kuttl-api-log" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.095307 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerName="watcher-kuttl-api-log" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.095336 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" containerName="watcher-api" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.096488 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.100853 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.100905 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.100859 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.137769 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.228573 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwsc4\" (UniqueName: \"kubernetes.io/projected/af9f6846-8757-45f4-b35a-2ebf99baf7fa-kube-api-access-kwsc4\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.228716 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.228754 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.228790 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.228825 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.228860 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af9f6846-8757-45f4-b35a-2ebf99baf7fa-logs\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.228879 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.330391 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.330483 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af9f6846-8757-45f4-b35a-2ebf99baf7fa-logs\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.330503 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.330578 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwsc4\" (UniqueName: \"kubernetes.io/projected/af9f6846-8757-45f4-b35a-2ebf99baf7fa-kube-api-access-kwsc4\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.330630 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.330652 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: 
I0121 21:34:23.330673 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.335630 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af9f6846-8757-45f4-b35a-2ebf99baf7fa-logs\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.340876 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.347206 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.347527 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.351961 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.365497 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwsc4\" (UniqueName: \"kubernetes.io/projected/af9f6846-8757-45f4-b35a-2ebf99baf7fa-kube-api-access-kwsc4\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.366124 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:23 crc kubenswrapper[4860]: I0121 21:34:23.461061 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:24 crc kubenswrapper[4860]: I0121 21:34:24.026446 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:24 crc kubenswrapper[4860]: I0121 21:34:24.041842 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"231b0585-8337-4c03-a70c-0075aa6bad1c","Type":"ContainerStarted","Data":"84585503d5331acc2a4a3abfc37e7d38a9fae97c7edd3e6d9d2797906cead350"} Jan 21 21:34:24 crc kubenswrapper[4860]: I0121 21:34:24.042830 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:24 crc kubenswrapper[4860]: I0121 21:34:24.086603 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.754926002 podStartE2EDuration="6.086564263s" podCreationTimestamp="2026-01-21 21:34:18 +0000 UTC" firstStartedPulling="2026-01-21 21:34:19.954140561 +0000 UTC m=+1552.176319031" lastFinishedPulling="2026-01-21 21:34:23.285778822 +0000 UTC m=+1555.507957292" observedRunningTime="2026-01-21 21:34:24.071842869 +0000 UTC m=+1556.294021359" watchObservedRunningTime="2026-01-21 21:34:24.086564263 +0000 UTC m=+1556.308742743" Jan 21 21:34:24 crc kubenswrapper[4860]: I0121 21:34:24.596655 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c294d7c9-7e39-4f16-b9fd-7e72c3c51232" path="/var/lib/kubelet/pods/c294d7c9-7e39-4f16-b9fd-7e72c3c51232/volumes" Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.059723 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"af9f6846-8757-45f4-b35a-2ebf99baf7fa","Type":"ContainerStarted","Data":"30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa"} Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.059825 4860 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"af9f6846-8757-45f4-b35a-2ebf99baf7fa","Type":"ContainerStarted","Data":"ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114"} Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.059855 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"af9f6846-8757-45f4-b35a-2ebf99baf7fa","Type":"ContainerStarted","Data":"5fae1355eab03b07e2ebe39abcafe46150a6ff2ed27759f494a8cc7e2934c60f"} Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.104883 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.104846906 podStartE2EDuration="2.104846906s" podCreationTimestamp="2026-01-21 21:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:34:25.092335681 +0000 UTC m=+1557.314514171" watchObservedRunningTime="2026-01-21 21:34:25.104846906 +0000 UTC m=+1557.327025386" Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.306563 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2ww86"] Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.328916 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2ww86"] Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.379647 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.380054 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="24d2c1f1-9d86-4e72-8728-38bf5bc3c674" containerName="watcher-applier" containerID="cri-o://63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170" 
gracePeriod=30 Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.448858 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.488496 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher5459-account-delete-7p8z7"] Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.490317 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.536524 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.536852 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="1938d0be-9281-4d7a-b9b1-84c5844428bd" containerName="watcher-decision-engine" containerID="cri-o://456c7ff79ee0fcbdc9fb3137a793fcfcab58d8493603b8831a3a1815d5607319" gracePeriod=30 Jan 21 21:34:25 crc kubenswrapper[4860]: E0121 21:34:25.546390 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.546590 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher5459-account-delete-7p8z7"] Jan 21 21:34:25 crc kubenswrapper[4860]: E0121 21:34:25.547739 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:34:25 crc kubenswrapper[4860]: E0121 21:34:25.562867 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:34:25 crc kubenswrapper[4860]: E0121 21:34:25.563040 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="24d2c1f1-9d86-4e72-8728-38bf5bc3c674" containerName="watcher-applier" Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.640636 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16a02b8-af21-4d59-8631-81e58038a823-operator-scripts\") pod \"watcher5459-account-delete-7p8z7\" (UID: \"d16a02b8-af21-4d59-8631-81e58038a823\") " pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.640741 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjpjj\" (UniqueName: \"kubernetes.io/projected/d16a02b8-af21-4d59-8631-81e58038a823-kube-api-access-rjpjj\") pod \"watcher5459-account-delete-7p8z7\" (UID: \"d16a02b8-af21-4d59-8631-81e58038a823\") " pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.743027 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjpjj\" (UniqueName: 
\"kubernetes.io/projected/d16a02b8-af21-4d59-8631-81e58038a823-kube-api-access-rjpjj\") pod \"watcher5459-account-delete-7p8z7\" (UID: \"d16a02b8-af21-4d59-8631-81e58038a823\") " pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.743321 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16a02b8-af21-4d59-8631-81e58038a823-operator-scripts\") pod \"watcher5459-account-delete-7p8z7\" (UID: \"d16a02b8-af21-4d59-8631-81e58038a823\") " pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.744154 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16a02b8-af21-4d59-8631-81e58038a823-operator-scripts\") pod \"watcher5459-account-delete-7p8z7\" (UID: \"d16a02b8-af21-4d59-8631-81e58038a823\") " pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.769149 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjpjj\" (UniqueName: \"kubernetes.io/projected/d16a02b8-af21-4d59-8631-81e58038a823-kube-api-access-rjpjj\") pod \"watcher5459-account-delete-7p8z7\" (UID: \"d16a02b8-af21-4d59-8631-81e58038a823\") " pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" Jan 21 21:34:25 crc kubenswrapper[4860]: I0121 21:34:25.830260 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" Jan 21 21:34:26 crc kubenswrapper[4860]: I0121 21:34:26.072760 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:26 crc kubenswrapper[4860]: I0121 21:34:26.073254 4860 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-api-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-jr6s5\" not found" Jan 21 21:34:26 crc kubenswrapper[4860]: E0121 21:34:26.260527 4860 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 21 21:34:26 crc kubenswrapper[4860]: E0121 21:34:26.260646 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data podName:af9f6846-8757-45f4-b35a-2ebf99baf7fa nodeName:}" failed. No retries permitted until 2026-01-21 21:34:26.760628162 +0000 UTC m=+1558.982806632 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data") pod "watcher-kuttl-api-0" (UID: "af9f6846-8757-45f4-b35a-2ebf99baf7fa") : secret "watcher-kuttl-api-config-data" not found Jan 21 21:34:26 crc kubenswrapper[4860]: I0121 21:34:26.467202 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher5459-account-delete-7p8z7"] Jan 21 21:34:26 crc kubenswrapper[4860]: I0121 21:34:26.655626 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a468d56c-b296-4927-b2dd-ea4d951ec5bd" path="/var/lib/kubelet/pods/a468d56c-b296-4927-b2dd-ea4d951ec5bd/volumes" Jan 21 21:34:26 crc kubenswrapper[4860]: E0121 21:34:26.783717 4860 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 21 21:34:26 crc kubenswrapper[4860]: E0121 21:34:26.783883 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data podName:af9f6846-8757-45f4-b35a-2ebf99baf7fa nodeName:}" failed. No retries permitted until 2026-01-21 21:34:27.783850669 +0000 UTC m=+1560.006029139 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data") pod "watcher-kuttl-api-0" (UID: "af9f6846-8757-45f4-b35a-2ebf99baf7fa") : secret "watcher-kuttl-api-config-data" not found Jan 21 21:34:27 crc kubenswrapper[4860]: I0121 21:34:27.085415 4860 generic.go:334] "Generic (PLEG): container finished" podID="d16a02b8-af21-4d59-8631-81e58038a823" containerID="86bd2cf47dd7cae3582f59870a214b26a7555ae5ef064b240a003470f9ba2a6a" exitCode=0 Jan 21 21:34:27 crc kubenswrapper[4860]: I0121 21:34:27.085526 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" event={"ID":"d16a02b8-af21-4d59-8631-81e58038a823","Type":"ContainerDied","Data":"86bd2cf47dd7cae3582f59870a214b26a7555ae5ef064b240a003470f9ba2a6a"} Jan 21 21:34:27 crc kubenswrapper[4860]: I0121 21:34:27.085579 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" event={"ID":"d16a02b8-af21-4d59-8631-81e58038a823","Type":"ContainerStarted","Data":"403fd6d58efd448f92749dc863d3a1554e1ff0a46276529691422fcc922c14f0"} Jan 21 21:34:27 crc kubenswrapper[4860]: I0121 21:34:27.085743 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerName="watcher-kuttl-api-log" containerID="cri-o://ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114" gracePeriod=30 Jan 21 21:34:27 crc kubenswrapper[4860]: I0121 21:34:27.085897 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerName="watcher-api" containerID="cri-o://30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa" gracePeriod=30 Jan 21 21:34:27 crc kubenswrapper[4860]: I0121 21:34:27.090666 4860 prober.go:107] 
"Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.147:9322/\": EOF" Jan 21 21:34:27 crc kubenswrapper[4860]: E0121 21:34:27.806370 4860 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 21 21:34:27 crc kubenswrapper[4860]: E0121 21:34:27.806923 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data podName:af9f6846-8757-45f4-b35a-2ebf99baf7fa nodeName:}" failed. No retries permitted until 2026-01-21 21:34:29.806899649 +0000 UTC m=+1562.029078119 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data") pod "watcher-kuttl-api-0" (UID: "af9f6846-8757-45f4-b35a-2ebf99baf7fa") : secret "watcher-kuttl-api-config-data" not found Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.096086 4860 generic.go:334] "Generic (PLEG): container finished" podID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerID="ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114" exitCode=143 Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.096319 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"af9f6846-8757-45f4-b35a-2ebf99baf7fa","Type":"ContainerDied","Data":"ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114"} Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.461510 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.540156 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.736729 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.737983 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16a02b8-af21-4d59-8631-81e58038a823-operator-scripts\") pod \"d16a02b8-af21-4d59-8631-81e58038a823\" (UID: \"d16a02b8-af21-4d59-8631-81e58038a823\") " Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.738067 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-combined-ca-bundle\") pod \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.738173 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-logs\") pod \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.738203 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjpjj\" (UniqueName: \"kubernetes.io/projected/d16a02b8-af21-4d59-8631-81e58038a823-kube-api-access-rjpjj\") pod \"d16a02b8-af21-4d59-8631-81e58038a823\" (UID: \"d16a02b8-af21-4d59-8631-81e58038a823\") " Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.738284 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk4pz\" (UniqueName: \"kubernetes.io/projected/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-kube-api-access-wk4pz\") pod 
\"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.738302 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-config-data\") pod \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\" (UID: \"24d2c1f1-9d86-4e72-8728-38bf5bc3c674\") " Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.738905 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-logs" (OuterVolumeSpecName: "logs") pod "24d2c1f1-9d86-4e72-8728-38bf5bc3c674" (UID: "24d2c1f1-9d86-4e72-8728-38bf5bc3c674"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.739288 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d16a02b8-af21-4d59-8631-81e58038a823-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d16a02b8-af21-4d59-8631-81e58038a823" (UID: "d16a02b8-af21-4d59-8631-81e58038a823"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.747163 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-kube-api-access-wk4pz" (OuterVolumeSpecName: "kube-api-access-wk4pz") pod "24d2c1f1-9d86-4e72-8728-38bf5bc3c674" (UID: "24d2c1f1-9d86-4e72-8728-38bf5bc3c674"). InnerVolumeSpecName "kube-api-access-wk4pz". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.747237 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16a02b8-af21-4d59-8631-81e58038a823-kube-api-access-rjpjj" (OuterVolumeSpecName: "kube-api-access-rjpjj") pod "d16a02b8-af21-4d59-8631-81e58038a823" (UID: "d16a02b8-af21-4d59-8631-81e58038a823"). InnerVolumeSpecName "kube-api-access-rjpjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.772090 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24d2c1f1-9d86-4e72-8728-38bf5bc3c674" (UID: "24d2c1f1-9d86-4e72-8728-38bf5bc3c674"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.799502 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-config-data" (OuterVolumeSpecName: "config-data") pod "24d2c1f1-9d86-4e72-8728-38bf5bc3c674" (UID: "24d2c1f1-9d86-4e72-8728-38bf5bc3c674"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.840219 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16a02b8-af21-4d59-8631-81e58038a823-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.840266 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.840277 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-logs\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.840286 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjpjj\" (UniqueName: \"kubernetes.io/projected/d16a02b8-af21-4d59-8631-81e58038a823-kube-api-access-rjpjj\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.840298 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:28 crc kubenswrapper[4860]: I0121 21:34:28.840310 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk4pz\" (UniqueName: \"kubernetes.io/projected/24d2c1f1-9d86-4e72-8728-38bf5bc3c674-kube-api-access-wk4pz\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.101867 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.102684 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="ceilometer-central-agent" containerID="cri-o://9e702bcaa4f43e4d0ac90695c3804438a727cbcb2cfbaee99ae5b629e50c0b33" gracePeriod=30
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.103260 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="proxy-httpd" containerID="cri-o://84585503d5331acc2a4a3abfc37e7d38a9fae97c7edd3e6d9d2797906cead350" gracePeriod=30
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.103322 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="sg-core" containerID="cri-o://9a18a4aebb1445a51984b09345d2fa4cb529749251e322304ac2eb0f16a3bc59" gracePeriod=30
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.103359 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="ceilometer-notification-agent" containerID="cri-o://dd9cb29ea1387b25b7a1c2f700bb921ab8d714be57bb511902aa6f5a8f0f1cdd" gracePeriod=30
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.118579 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7" event={"ID":"d16a02b8-af21-4d59-8631-81e58038a823","Type":"ContainerDied","Data":"403fd6d58efd448f92749dc863d3a1554e1ff0a46276529691422fcc922c14f0"}
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.118639 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher5459-account-delete-7p8z7"
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.118647 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="403fd6d58efd448f92749dc863d3a1554e1ff0a46276529691422fcc922c14f0"
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.127109 4860 generic.go:334] "Generic (PLEG): container finished" podID="24d2c1f1-9d86-4e72-8728-38bf5bc3c674" containerID="63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170" exitCode=0
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.127158 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"24d2c1f1-9d86-4e72-8728-38bf5bc3c674","Type":"ContainerDied","Data":"63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170"}
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.127274 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.129072 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"24d2c1f1-9d86-4e72-8728-38bf5bc3c674","Type":"ContainerDied","Data":"09ce82f46da9a6b0d4e2fe35451efcd326a12bb96c308a177a43e6a7cc13bc91"}
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.129187 4860 scope.go:117] "RemoveContainer" containerID="63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170"
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.197416 4860 scope.go:117] "RemoveContainer" containerID="63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170"
Jan 21 21:34:29 crc kubenswrapper[4860]: E0121 21:34:29.199237 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170\": container with ID starting with 63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170 not found: ID does not exist" containerID="63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170"
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.199326 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170"} err="failed to get container status \"63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170\": rpc error: code = NotFound desc = could not find container \"63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170\": container with ID starting with 63ea1fe758246a30a681cf71268d1409b821249babfc2473f6131b8385d75170 not found: ID does not exist"
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.224554 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.235550 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:34:29 crc kubenswrapper[4860]: I0121 21:34:29.838098 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:34:29 crc kubenswrapper[4860]: E0121 21:34:29.870747 4860 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found
Jan 21 21:34:29 crc kubenswrapper[4860]: E0121 21:34:29.870859 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data podName:af9f6846-8757-45f4-b35a-2ebf99baf7fa nodeName:}" failed. No retries permitted until 2026-01-21 21:34:33.87083369 +0000 UTC m=+1566.093012150 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data") pod "watcher-kuttl-api-0" (UID: "af9f6846-8757-45f4-b35a-2ebf99baf7fa") : secret "watcher-kuttl-api-config-data" not found
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.139986 4860 generic.go:334] "Generic (PLEG): container finished" podID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerID="84585503d5331acc2a4a3abfc37e7d38a9fae97c7edd3e6d9d2797906cead350" exitCode=0
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.140035 4860 generic.go:334] "Generic (PLEG): container finished" podID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerID="9a18a4aebb1445a51984b09345d2fa4cb529749251e322304ac2eb0f16a3bc59" exitCode=2
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.140048 4860 generic.go:334] "Generic (PLEG): container finished" podID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerID="dd9cb29ea1387b25b7a1c2f700bb921ab8d714be57bb511902aa6f5a8f0f1cdd" exitCode=0
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.140061 4860 generic.go:334] "Generic (PLEG): container finished" podID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerID="9e702bcaa4f43e4d0ac90695c3804438a727cbcb2cfbaee99ae5b629e50c0b33" exitCode=0
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.140089 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"231b0585-8337-4c03-a70c-0075aa6bad1c","Type":"ContainerDied","Data":"84585503d5331acc2a4a3abfc37e7d38a9fae97c7edd3e6d9d2797906cead350"}
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.140165 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"231b0585-8337-4c03-a70c-0075aa6bad1c","Type":"ContainerDied","Data":"9a18a4aebb1445a51984b09345d2fa4cb529749251e322304ac2eb0f16a3bc59"}
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.140178 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"231b0585-8337-4c03-a70c-0075aa6bad1c","Type":"ContainerDied","Data":"dd9cb29ea1387b25b7a1c2f700bb921ab8d714be57bb511902aa6f5a8f0f1cdd"}
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.140188 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"231b0585-8337-4c03-a70c-0075aa6bad1c","Type":"ContainerDied","Data":"9e702bcaa4f43e4d0ac90695c3804438a727cbcb2cfbaee99ae5b629e50c0b33"}
Jan 21 21:34:30 crc kubenswrapper[4860]: E0121 21:34:30.427682 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="456c7ff79ee0fcbdc9fb3137a793fcfcab58d8493603b8831a3a1815d5607319" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"]
Jan 21 21:34:30 crc kubenswrapper[4860]: E0121 21:34:30.432364 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="456c7ff79ee0fcbdc9fb3137a793fcfcab58d8493603b8831a3a1815d5607319" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"]
Jan 21 21:34:30 crc kubenswrapper[4860]: E0121 21:34:30.434250 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="456c7ff79ee0fcbdc9fb3137a793fcfcab58d8493603b8831a3a1815d5607319" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"]
Jan 21 21:34:30 crc kubenswrapper[4860]: E0121 21:34:30.434296 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="1938d0be-9281-4d7a-b9b1-84c5844428bd" containerName="watcher-decision-engine"
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.480756 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-rx9wf"]
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.504730 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-rx9wf"]
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.518928 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-5459-account-create-update-sfqxg"]
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.528647 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher5459-account-delete-7p8z7"]
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.538512 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-5459-account-create-update-sfqxg"]
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.547286 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher5459-account-delete-7p8z7"]
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.576912 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.597866 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d2c1f1-9d86-4e72-8728-38bf5bc3c674" path="/var/lib/kubelet/pods/24d2c1f1-9d86-4e72-8728-38bf5bc3c674/volumes"
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.598540 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="358ec7aa-66f1-47a5-ae23-de77490acea4" path="/var/lib/kubelet/pods/358ec7aa-66f1-47a5-ae23-de77490acea4/volumes"
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.599151 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16a02b8-af21-4d59-8631-81e58038a823" path="/var/lib/kubelet/pods/d16a02b8-af21-4d59-8631-81e58038a823/volumes"
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.600711 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2c99a2e-dff6-49dd-8fdc-c69654d8fa67" path="/var/lib/kubelet/pods/e2c99a2e-dff6-49dd-8fdc-c69654d8fa67/volumes"
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.690400 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-config-data\") pod \"231b0585-8337-4c03-a70c-0075aa6bad1c\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") "
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.690454 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-scripts\") pod \"231b0585-8337-4c03-a70c-0075aa6bad1c\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") "
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.690488 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-log-httpd\") pod \"231b0585-8337-4c03-a70c-0075aa6bad1c\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") "
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.690522 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-run-httpd\") pod \"231b0585-8337-4c03-a70c-0075aa6bad1c\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") "
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.690561 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-ceilometer-tls-certs\") pod \"231b0585-8337-4c03-a70c-0075aa6bad1c\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") "
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.690691 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-sg-core-conf-yaml\") pod \"231b0585-8337-4c03-a70c-0075aa6bad1c\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") "
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.690712 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-combined-ca-bundle\") pod \"231b0585-8337-4c03-a70c-0075aa6bad1c\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") "
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.690758 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-224z8\" (UniqueName: \"kubernetes.io/projected/231b0585-8337-4c03-a70c-0075aa6bad1c-kube-api-access-224z8\") pod \"231b0585-8337-4c03-a70c-0075aa6bad1c\" (UID: \"231b0585-8337-4c03-a70c-0075aa6bad1c\") "
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.691335 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "231b0585-8337-4c03-a70c-0075aa6bad1c" (UID: "231b0585-8337-4c03-a70c-0075aa6bad1c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.693262 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "231b0585-8337-4c03-a70c-0075aa6bad1c" (UID: "231b0585-8337-4c03-a70c-0075aa6bad1c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.704173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-scripts" (OuterVolumeSpecName: "scripts") pod "231b0585-8337-4c03-a70c-0075aa6bad1c" (UID: "231b0585-8337-4c03-a70c-0075aa6bad1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.704518 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231b0585-8337-4c03-a70c-0075aa6bad1c-kube-api-access-224z8" (OuterVolumeSpecName: "kube-api-access-224z8") pod "231b0585-8337-4c03-a70c-0075aa6bad1c" (UID: "231b0585-8337-4c03-a70c-0075aa6bad1c"). InnerVolumeSpecName "kube-api-access-224z8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.723805 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "231b0585-8337-4c03-a70c-0075aa6bad1c" (UID: "231b0585-8337-4c03-a70c-0075aa6bad1c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.749327 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "231b0585-8337-4c03-a70c-0075aa6bad1c" (UID: "231b0585-8337-4c03-a70c-0075aa6bad1c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.774790 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "231b0585-8337-4c03-a70c-0075aa6bad1c" (UID: "231b0585-8337-4c03-a70c-0075aa6bad1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.788204 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-config-data" (OuterVolumeSpecName: "config-data") pod "231b0585-8337-4c03-a70c-0075aa6bad1c" (UID: "231b0585-8337-4c03-a70c-0075aa6bad1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.792678 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.792725 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.792742 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.792756 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/231b0585-8337-4c03-a70c-0075aa6bad1c-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.792769 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.792784 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.792796 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231b0585-8337-4c03-a70c-0075aa6bad1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:30 crc kubenswrapper[4860]: I0121 21:34:30.792808 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-224z8\" (UniqueName: \"kubernetes.io/projected/231b0585-8337-4c03-a70c-0075aa6bad1c-kube-api-access-224z8\") on node \"crc\" DevicePath \"\""
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.154411 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"231b0585-8337-4c03-a70c-0075aa6bad1c","Type":"ContainerDied","Data":"2de69738ef0c99909024d6b4b9daf11f2f23c0d846f68dda02f11fffd3e562fd"}
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.154480 4860 scope.go:117] "RemoveContainer" containerID="84585503d5331acc2a4a3abfc37e7d38a9fae97c7edd3e6d9d2797906cead350"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.154532 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.186670 4860 scope.go:117] "RemoveContainer" containerID="9a18a4aebb1445a51984b09345d2fa4cb529749251e322304ac2eb0f16a3bc59"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.199406 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.206970 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.224752 4860 scope.go:117] "RemoveContainer" containerID="dd9cb29ea1387b25b7a1c2f700bb921ab8d714be57bb511902aa6f5a8f0f1cdd"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.235532 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:34:31 crc kubenswrapper[4860]: E0121 21:34:31.237124 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="ceilometer-central-agent"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.237282 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="ceilometer-central-agent"
Jan 21 21:34:31 crc kubenswrapper[4860]: E0121 21:34:31.237363 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d2c1f1-9d86-4e72-8728-38bf5bc3c674" containerName="watcher-applier"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.237418 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d2c1f1-9d86-4e72-8728-38bf5bc3c674" containerName="watcher-applier"
Jan 21 21:34:31 crc kubenswrapper[4860]: E0121 21:34:31.239685 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16a02b8-af21-4d59-8631-81e58038a823" containerName="mariadb-account-delete"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.239832 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16a02b8-af21-4d59-8631-81e58038a823" containerName="mariadb-account-delete"
Jan 21 21:34:31 crc kubenswrapper[4860]: E0121 21:34:31.239982 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="ceilometer-notification-agent"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.240051 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="ceilometer-notification-agent"
Jan 21 21:34:31 crc kubenswrapper[4860]: E0121 21:34:31.240127 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="sg-core"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.240188 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="sg-core"
Jan 21 21:34:31 crc kubenswrapper[4860]: E0121 21:34:31.240259 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="proxy-httpd"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.240312 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="proxy-httpd"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.244435 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="ceilometer-notification-agent"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.244720 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d2c1f1-9d86-4e72-8728-38bf5bc3c674" containerName="watcher-applier"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.244806 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16a02b8-af21-4d59-8631-81e58038a823" containerName="mariadb-account-delete"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.244886 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="sg-core"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.244984 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="ceilometer-central-agent"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.245116 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" containerName="proxy-httpd"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.247150 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.251700 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.255667 4860 scope.go:117] "RemoveContainer" containerID="9e702bcaa4f43e4d0ac90695c3804438a727cbcb2cfbaee99ae5b629e50c0b33"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.256414 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.256539 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.256498 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.281788 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.147:9322/\": read tcp 10.217.0.2:55282->10.217.0.147:9322: read: connection reset by peer"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.405423 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.405573 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.405713 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-config-data\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.405765 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbp7l\" (UniqueName: \"kubernetes.io/projected/142126d0-df41-40a8-abe2-00359a595e88-kube-api-access-lbp7l\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.405811 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-scripts\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.405837 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-run-httpd\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.405864 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-log-httpd\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.405904 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.507468 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.507533 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.507603 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-config-data\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.507627 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbp7l\" (UniqueName: \"kubernetes.io/projected/142126d0-df41-40a8-abe2-00359a595e88-kube-api-access-lbp7l\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.507658 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-scripts\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.507674 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-run-httpd\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.507692 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-log-httpd\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.507713 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.508630 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-run-httpd\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.508985 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-log-httpd\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.514325 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.516680 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.520681 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.523288 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-scripts\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.524537 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-config-data\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:34:31
crc kubenswrapper[4860]: I0121 21:34:31.532930 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbp7l\" (UniqueName: \"kubernetes.io/projected/142126d0-df41-40a8-abe2-00359a595e88-kube-api-access-lbp7l\") pod \"ceilometer-0\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.577307 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.739790 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.915070 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-custom-prometheus-ca\") pod \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.915282 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-public-tls-certs\") pod \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.915443 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwsc4\" (UniqueName: \"kubernetes.io/projected/af9f6846-8757-45f4-b35a-2ebf99baf7fa-kube-api-access-kwsc4\") pod \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.915470 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-internal-tls-certs\") pod \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.915563 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af9f6846-8757-45f4-b35a-2ebf99baf7fa-logs\") pod \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.915590 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-combined-ca-bundle\") pod \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.915655 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data\") pod \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\" (UID: \"af9f6846-8757-45f4-b35a-2ebf99baf7fa\") " Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.916656 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af9f6846-8757-45f4-b35a-2ebf99baf7fa-logs" (OuterVolumeSpecName: "logs") pod "af9f6846-8757-45f4-b35a-2ebf99baf7fa" (UID: "af9f6846-8757-45f4-b35a-2ebf99baf7fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.923327 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af9f6846-8757-45f4-b35a-2ebf99baf7fa-kube-api-access-kwsc4" (OuterVolumeSpecName: "kube-api-access-kwsc4") pod "af9f6846-8757-45f4-b35a-2ebf99baf7fa" (UID: "af9f6846-8757-45f4-b35a-2ebf99baf7fa"). 
InnerVolumeSpecName "kube-api-access-kwsc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.950173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af9f6846-8757-45f4-b35a-2ebf99baf7fa" (UID: "af9f6846-8757-45f4-b35a-2ebf99baf7fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.963211 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "af9f6846-8757-45f4-b35a-2ebf99baf7fa" (UID: "af9f6846-8757-45f4-b35a-2ebf99baf7fa"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.971035 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "af9f6846-8757-45f4-b35a-2ebf99baf7fa" (UID: "af9f6846-8757-45f4-b35a-2ebf99baf7fa"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.972875 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "af9f6846-8757-45f4-b35a-2ebf99baf7fa" (UID: "af9f6846-8757-45f4-b35a-2ebf99baf7fa"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:31 crc kubenswrapper[4860]: I0121 21:34:31.978432 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data" (OuterVolumeSpecName: "config-data") pod "af9f6846-8757-45f4-b35a-2ebf99baf7fa" (UID: "af9f6846-8757-45f4-b35a-2ebf99baf7fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.018432 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwsc4\" (UniqueName: \"kubernetes.io/projected/af9f6846-8757-45f4-b35a-2ebf99baf7fa-kube-api-access-kwsc4\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.018484 4860 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.018495 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af9f6846-8757-45f4-b35a-2ebf99baf7fa-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.018506 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.018514 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.018523 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.018533 4860 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9f6846-8757-45f4-b35a-2ebf99baf7fa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.072124 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.103809 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.104146 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.104332 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.105802 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.106067 
4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" gracePeriod=600 Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.176080 4860 generic.go:334] "Generic (PLEG): container finished" podID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerID="30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa" exitCode=0 Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.176219 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"af9f6846-8757-45f4-b35a-2ebf99baf7fa","Type":"ContainerDied","Data":"30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa"} Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.176333 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"af9f6846-8757-45f4-b35a-2ebf99baf7fa","Type":"ContainerDied","Data":"5fae1355eab03b07e2ebe39abcafe46150a6ff2ed27759f494a8cc7e2934c60f"} Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.176362 4860 scope.go:117] "RemoveContainer" containerID="30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.177455 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.177540 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"142126d0-df41-40a8-abe2-00359a595e88","Type":"ContainerStarted","Data":"d87d78fa92c2e272db731932c6bcad0da6d1148c740353c48544adc9b49e1bf6"} Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.218573 4860 scope.go:117] "RemoveContainer" containerID="ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.242435 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:32 crc kubenswrapper[4860]: E0121 21:34:32.245247 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.258558 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.263940 4860 scope.go:117] "RemoveContainer" containerID="30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa" Jan 21 21:34:32 crc kubenswrapper[4860]: E0121 21:34:32.264904 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa\": container with ID starting with 30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa not found: ID does not exist" 
containerID="30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.264968 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa"} err="failed to get container status \"30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa\": rpc error: code = NotFound desc = could not find container \"30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa\": container with ID starting with 30f99560656b54bda21136f5298d12e45e5d9f4f9791854ca2c77d11ce3bacfa not found: ID does not exist" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.265005 4860 scope.go:117] "RemoveContainer" containerID="ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114" Jan 21 21:34:32 crc kubenswrapper[4860]: E0121 21:34:32.283159 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114\": container with ID starting with ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114 not found: ID does not exist" containerID="ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.283240 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114"} err="failed to get container status \"ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114\": rpc error: code = NotFound desc = could not find container \"ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114\": container with ID starting with ec31990e261961d868f1b44ce889012277853d2097a111f13ab7323e68b33114 not found: ID does not exist" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.611231 4860 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231b0585-8337-4c03-a70c-0075aa6bad1c" path="/var/lib/kubelet/pods/231b0585-8337-4c03-a70c-0075aa6bad1c/volumes" Jan 21 21:34:32 crc kubenswrapper[4860]: I0121 21:34:32.613338 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" path="/var/lib/kubelet/pods/af9f6846-8757-45f4-b35a-2ebf99baf7fa/volumes" Jan 21 21:34:33 crc kubenswrapper[4860]: I0121 21:34:33.189033 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"142126d0-df41-40a8-abe2-00359a595e88","Type":"ContainerStarted","Data":"e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9"} Jan 21 21:34:33 crc kubenswrapper[4860]: I0121 21:34:33.191729 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" exitCode=0 Jan 21 21:34:33 crc kubenswrapper[4860]: I0121 21:34:33.191818 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97"} Jan 21 21:34:33 crc kubenswrapper[4860]: I0121 21:34:33.191895 4860 scope.go:117] "RemoveContainer" containerID="6a1026d7df8e6decaf8dcd0187c59fd31bbfa3791da6287908484db6b5520da6" Jan 21 21:34:33 crc kubenswrapper[4860]: I0121 21:34:33.193032 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:34:33 crc kubenswrapper[4860]: E0121 21:34:33.193630 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.225728 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.226750 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"142126d0-df41-40a8-abe2-00359a595e88","Type":"ContainerStarted","Data":"218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1"} Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.226808 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"142126d0-df41-40a8-abe2-00359a595e88","Type":"ContainerStarted","Data":"7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48"} Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.239432 4860 generic.go:334] "Generic (PLEG): container finished" podID="1938d0be-9281-4d7a-b9b1-84c5844428bd" containerID="456c7ff79ee0fcbdc9fb3137a793fcfcab58d8493603b8831a3a1815d5607319" exitCode=0 Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.239502 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1938d0be-9281-4d7a-b9b1-84c5844428bd","Type":"ContainerDied","Data":"456c7ff79ee0fcbdc9fb3137a793fcfcab58d8493603b8831a3a1815d5607319"} Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.239557 4860 scope.go:117] "RemoveContainer" containerID="456c7ff79ee0fcbdc9fb3137a793fcfcab58d8493603b8831a3a1815d5607319" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.239589 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.368202 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1938d0be-9281-4d7a-b9b1-84c5844428bd-logs\") pod \"1938d0be-9281-4d7a-b9b1-84c5844428bd\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.368341 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-config-data\") pod \"1938d0be-9281-4d7a-b9b1-84c5844428bd\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.368388 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-combined-ca-bundle\") pod \"1938d0be-9281-4d7a-b9b1-84c5844428bd\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.368513 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pscq\" (UniqueName: \"kubernetes.io/projected/1938d0be-9281-4d7a-b9b1-84c5844428bd-kube-api-access-6pscq\") pod \"1938d0be-9281-4d7a-b9b1-84c5844428bd\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.368923 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1938d0be-9281-4d7a-b9b1-84c5844428bd-logs" (OuterVolumeSpecName: "logs") pod "1938d0be-9281-4d7a-b9b1-84c5844428bd" (UID: "1938d0be-9281-4d7a-b9b1-84c5844428bd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.369573 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-custom-prometheus-ca\") pod \"1938d0be-9281-4d7a-b9b1-84c5844428bd\" (UID: \"1938d0be-9281-4d7a-b9b1-84c5844428bd\") " Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.370424 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1938d0be-9281-4d7a-b9b1-84c5844428bd-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.375511 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1938d0be-9281-4d7a-b9b1-84c5844428bd-kube-api-access-6pscq" (OuterVolumeSpecName: "kube-api-access-6pscq") pod "1938d0be-9281-4d7a-b9b1-84c5844428bd" (UID: "1938d0be-9281-4d7a-b9b1-84c5844428bd"). InnerVolumeSpecName "kube-api-access-6pscq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.399367 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1938d0be-9281-4d7a-b9b1-84c5844428bd" (UID: "1938d0be-9281-4d7a-b9b1-84c5844428bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.400993 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1938d0be-9281-4d7a-b9b1-84c5844428bd" (UID: "1938d0be-9281-4d7a-b9b1-84c5844428bd"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.418536 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-config-data" (OuterVolumeSpecName: "config-data") pod "1938d0be-9281-4d7a-b9b1-84c5844428bd" (UID: "1938d0be-9281-4d7a-b9b1-84c5844428bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.473136 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.473199 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.473223 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pscq\" (UniqueName: \"kubernetes.io/projected/1938d0be-9281-4d7a-b9b1-84c5844428bd-kube-api-access-6pscq\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.473244 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1938d0be-9281-4d7a-b9b1-84c5844428bd-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.593002 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:34:34 crc kubenswrapper[4860]: I0121 21:34:34.593068 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.195353 4860 
scope.go:117] "RemoveContainer" containerID="49f77acde81e40177fd99669a47889a0d09d86fcf259e61dbaecffb400f25edd" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.493230 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-cdv2f"] Jan 21 21:34:35 crc kubenswrapper[4860]: E0121 21:34:35.493825 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1938d0be-9281-4d7a-b9b1-84c5844428bd" containerName="watcher-decision-engine" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.493846 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1938d0be-9281-4d7a-b9b1-84c5844428bd" containerName="watcher-decision-engine" Jan 21 21:34:35 crc kubenswrapper[4860]: E0121 21:34:35.493863 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerName="watcher-kuttl-api-log" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.493872 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerName="watcher-kuttl-api-log" Jan 21 21:34:35 crc kubenswrapper[4860]: E0121 21:34:35.493906 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerName="watcher-api" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.493915 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerName="watcher-api" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.494171 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerName="watcher-kuttl-api-log" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.494188 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="af9f6846-8757-45f4-b35a-2ebf99baf7fa" containerName="watcher-api" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.494213 4860 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1938d0be-9281-4d7a-b9b1-84c5844428bd" containerName="watcher-decision-engine" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.494948 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-cdv2f" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.517861 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8"] Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.519653 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.524520 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.559089 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-cdv2f"] Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.613025 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbf5887-97be-4c75-8de2-81b1484db9f5-operator-scripts\") pod \"watcher-db-create-cdv2f\" (UID: \"4fbf5887-97be-4c75-8de2-81b1484db9f5\") " pod="watcher-kuttl-default/watcher-db-create-cdv2f" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.613099 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls9v7\" (UniqueName: \"kubernetes.io/projected/4fbf5887-97be-4c75-8de2-81b1484db9f5-kube-api-access-ls9v7\") pod \"watcher-db-create-cdv2f\" (UID: \"4fbf5887-97be-4c75-8de2-81b1484db9f5\") " pod="watcher-kuttl-default/watcher-db-create-cdv2f" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.626696 4860 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8"] Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.735221 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbf5887-97be-4c75-8de2-81b1484db9f5-operator-scripts\") pod \"watcher-db-create-cdv2f\" (UID: \"4fbf5887-97be-4c75-8de2-81b1484db9f5\") " pod="watcher-kuttl-default/watcher-db-create-cdv2f" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.735309 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-operator-scripts\") pod \"watcher-4ccb-account-create-update-59jc8\" (UID: \"1d3d99bd-a6ee-4d83-b601-41d63c5d408c\") " pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.735356 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls9v7\" (UniqueName: \"kubernetes.io/projected/4fbf5887-97be-4c75-8de2-81b1484db9f5-kube-api-access-ls9v7\") pod \"watcher-db-create-cdv2f\" (UID: \"4fbf5887-97be-4c75-8de2-81b1484db9f5\") " pod="watcher-kuttl-default/watcher-db-create-cdv2f" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.735380 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77wdj\" (UniqueName: \"kubernetes.io/projected/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-kube-api-access-77wdj\") pod \"watcher-4ccb-account-create-update-59jc8\" (UID: \"1d3d99bd-a6ee-4d83-b601-41d63c5d408c\") " pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.736647 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4fbf5887-97be-4c75-8de2-81b1484db9f5-operator-scripts\") pod \"watcher-db-create-cdv2f\" (UID: \"4fbf5887-97be-4c75-8de2-81b1484db9f5\") " pod="watcher-kuttl-default/watcher-db-create-cdv2f" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.822230 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls9v7\" (UniqueName: \"kubernetes.io/projected/4fbf5887-97be-4c75-8de2-81b1484db9f5-kube-api-access-ls9v7\") pod \"watcher-db-create-cdv2f\" (UID: \"4fbf5887-97be-4c75-8de2-81b1484db9f5\") " pod="watcher-kuttl-default/watcher-db-create-cdv2f" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.843447 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-operator-scripts\") pod \"watcher-4ccb-account-create-update-59jc8\" (UID: \"1d3d99bd-a6ee-4d83-b601-41d63c5d408c\") " pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.843531 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77wdj\" (UniqueName: \"kubernetes.io/projected/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-kube-api-access-77wdj\") pod \"watcher-4ccb-account-create-update-59jc8\" (UID: \"1d3d99bd-a6ee-4d83-b601-41d63c5d408c\") " pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.844543 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-operator-scripts\") pod \"watcher-4ccb-account-create-update-59jc8\" (UID: \"1d3d99bd-a6ee-4d83-b601-41d63c5d408c\") " pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.862554 4860 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-cdv2f" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.870057 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77wdj\" (UniqueName: \"kubernetes.io/projected/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-kube-api-access-77wdj\") pod \"watcher-4ccb-account-create-update-59jc8\" (UID: \"1d3d99bd-a6ee-4d83-b601-41d63c5d408c\") " pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" Jan 21 21:34:35 crc kubenswrapper[4860]: I0121 21:34:35.886585 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" Jan 21 21:34:36 crc kubenswrapper[4860]: I0121 21:34:36.282502 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"142126d0-df41-40a8-abe2-00359a595e88","Type":"ContainerStarted","Data":"31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9"} Jan 21 21:34:36 crc kubenswrapper[4860]: I0121 21:34:36.283612 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:34:36 crc kubenswrapper[4860]: I0121 21:34:36.311848 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.208082204 podStartE2EDuration="5.311809211s" podCreationTimestamp="2026-01-21 21:34:31 +0000 UTC" firstStartedPulling="2026-01-21 21:34:32.074454211 +0000 UTC m=+1564.296632681" lastFinishedPulling="2026-01-21 21:34:35.178181218 +0000 UTC m=+1567.400359688" observedRunningTime="2026-01-21 21:34:36.305913439 +0000 UTC m=+1568.528091909" watchObservedRunningTime="2026-01-21 21:34:36.311809211 +0000 UTC m=+1568.533987681" Jan 21 21:34:36 crc kubenswrapper[4860]: I0121 21:34:36.504187 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher-db-create-cdv2f"] Jan 21 21:34:36 crc kubenswrapper[4860]: W0121 21:34:36.511412 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbf5887_97be_4c75_8de2_81b1484db9f5.slice/crio-9e82ea7133dbea4bfd7e92d9cbbe08465ae37d36d2542432f1808e32b25269a1 WatchSource:0}: Error finding container 9e82ea7133dbea4bfd7e92d9cbbe08465ae37d36d2542432f1808e32b25269a1: Status 404 returned error can't find the container with id 9e82ea7133dbea4bfd7e92d9cbbe08465ae37d36d2542432f1808e32b25269a1 Jan 21 21:34:36 crc kubenswrapper[4860]: I0121 21:34:36.607783 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1938d0be-9281-4d7a-b9b1-84c5844428bd" path="/var/lib/kubelet/pods/1938d0be-9281-4d7a-b9b1-84c5844428bd/volumes" Jan 21 21:34:36 crc kubenswrapper[4860]: I0121 21:34:36.664869 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8"] Jan 21 21:34:37 crc kubenswrapper[4860]: I0121 21:34:37.298224 4860 generic.go:334] "Generic (PLEG): container finished" podID="4fbf5887-97be-4c75-8de2-81b1484db9f5" containerID="2ade1f595062cbadefc1775970b79d488ebd925d2f0716232dad6693c0d1f1fc" exitCode=0 Jan 21 21:34:37 crc kubenswrapper[4860]: I0121 21:34:37.298382 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-cdv2f" event={"ID":"4fbf5887-97be-4c75-8de2-81b1484db9f5","Type":"ContainerDied","Data":"2ade1f595062cbadefc1775970b79d488ebd925d2f0716232dad6693c0d1f1fc"} Jan 21 21:34:37 crc kubenswrapper[4860]: I0121 21:34:37.298418 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-cdv2f" event={"ID":"4fbf5887-97be-4c75-8de2-81b1484db9f5","Type":"ContainerStarted","Data":"9e82ea7133dbea4bfd7e92d9cbbe08465ae37d36d2542432f1808e32b25269a1"} Jan 21 21:34:37 crc kubenswrapper[4860]: I0121 21:34:37.304868 4860 
generic.go:334] "Generic (PLEG): container finished" podID="1d3d99bd-a6ee-4d83-b601-41d63c5d408c" containerID="a886018558cdff59514cfc207e4da819eec8f58d00003200d04086e0558f197d" exitCode=0 Jan 21 21:34:37 crc kubenswrapper[4860]: I0121 21:34:37.306231 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" event={"ID":"1d3d99bd-a6ee-4d83-b601-41d63c5d408c","Type":"ContainerDied","Data":"a886018558cdff59514cfc207e4da819eec8f58d00003200d04086e0558f197d"} Jan 21 21:34:37 crc kubenswrapper[4860]: I0121 21:34:37.306280 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" event={"ID":"1d3d99bd-a6ee-4d83-b601-41d63c5d408c","Type":"ContainerStarted","Data":"8d79b29733e0b5f06f3c1565348c5de6ae3901ecc7c6587cba8ab4e902716f7c"} Jan 21 21:34:38 crc kubenswrapper[4860]: I0121 21:34:38.842296 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" Jan 21 21:34:38 crc kubenswrapper[4860]: I0121 21:34:38.854661 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-cdv2f" Jan 21 21:34:38 crc kubenswrapper[4860]: I0121 21:34:38.912599 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77wdj\" (UniqueName: \"kubernetes.io/projected/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-kube-api-access-77wdj\") pod \"1d3d99bd-a6ee-4d83-b601-41d63c5d408c\" (UID: \"1d3d99bd-a6ee-4d83-b601-41d63c5d408c\") " Jan 21 21:34:38 crc kubenswrapper[4860]: I0121 21:34:38.912647 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls9v7\" (UniqueName: \"kubernetes.io/projected/4fbf5887-97be-4c75-8de2-81b1484db9f5-kube-api-access-ls9v7\") pod \"4fbf5887-97be-4c75-8de2-81b1484db9f5\" (UID: \"4fbf5887-97be-4c75-8de2-81b1484db9f5\") " Jan 21 21:34:38 crc kubenswrapper[4860]: I0121 21:34:38.912686 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbf5887-97be-4c75-8de2-81b1484db9f5-operator-scripts\") pod \"4fbf5887-97be-4c75-8de2-81b1484db9f5\" (UID: \"4fbf5887-97be-4c75-8de2-81b1484db9f5\") " Jan 21 21:34:38 crc kubenswrapper[4860]: I0121 21:34:38.912734 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-operator-scripts\") pod \"1d3d99bd-a6ee-4d83-b601-41d63c5d408c\" (UID: \"1d3d99bd-a6ee-4d83-b601-41d63c5d408c\") " Jan 21 21:34:38 crc kubenswrapper[4860]: I0121 21:34:38.913653 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fbf5887-97be-4c75-8de2-81b1484db9f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fbf5887-97be-4c75-8de2-81b1484db9f5" (UID: "4fbf5887-97be-4c75-8de2-81b1484db9f5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:34:38 crc kubenswrapper[4860]: I0121 21:34:38.914072 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d3d99bd-a6ee-4d83-b601-41d63c5d408c" (UID: "1d3d99bd-a6ee-4d83-b601-41d63c5d408c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:34:38 crc kubenswrapper[4860]: I0121 21:34:38.922353 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fbf5887-97be-4c75-8de2-81b1484db9f5-kube-api-access-ls9v7" (OuterVolumeSpecName: "kube-api-access-ls9v7") pod "4fbf5887-97be-4c75-8de2-81b1484db9f5" (UID: "4fbf5887-97be-4c75-8de2-81b1484db9f5"). InnerVolumeSpecName "kube-api-access-ls9v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:34:38 crc kubenswrapper[4860]: I0121 21:34:38.944862 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-kube-api-access-77wdj" (OuterVolumeSpecName: "kube-api-access-77wdj") pod "1d3d99bd-a6ee-4d83-b601-41d63c5d408c" (UID: "1d3d99bd-a6ee-4d83-b601-41d63c5d408c"). InnerVolumeSpecName "kube-api-access-77wdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:34:39 crc kubenswrapper[4860]: I0121 21:34:39.015807 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77wdj\" (UniqueName: \"kubernetes.io/projected/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-kube-api-access-77wdj\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:39 crc kubenswrapper[4860]: I0121 21:34:39.015870 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls9v7\" (UniqueName: \"kubernetes.io/projected/4fbf5887-97be-4c75-8de2-81b1484db9f5-kube-api-access-ls9v7\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:39 crc kubenswrapper[4860]: I0121 21:34:39.015882 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbf5887-97be-4c75-8de2-81b1484db9f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:39 crc kubenswrapper[4860]: I0121 21:34:39.015895 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3d99bd-a6ee-4d83-b601-41d63c5d408c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:39 crc kubenswrapper[4860]: I0121 21:34:39.326105 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" event={"ID":"1d3d99bd-a6ee-4d83-b601-41d63c5d408c","Type":"ContainerDied","Data":"8d79b29733e0b5f06f3c1565348c5de6ae3901ecc7c6587cba8ab4e902716f7c"} Jan 21 21:34:39 crc kubenswrapper[4860]: I0121 21:34:39.326433 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d79b29733e0b5f06f3c1565348c5de6ae3901ecc7c6587cba8ab4e902716f7c" Jan 21 21:34:39 crc kubenswrapper[4860]: I0121 21:34:39.326498 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8" Jan 21 21:34:39 crc kubenswrapper[4860]: I0121 21:34:39.335197 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-cdv2f" event={"ID":"4fbf5887-97be-4c75-8de2-81b1484db9f5","Type":"ContainerDied","Data":"9e82ea7133dbea4bfd7e92d9cbbe08465ae37d36d2542432f1808e32b25269a1"} Jan 21 21:34:39 crc kubenswrapper[4860]: I0121 21:34:39.335235 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e82ea7133dbea4bfd7e92d9cbbe08465ae37d36d2542432f1808e32b25269a1" Jan 21 21:34:39 crc kubenswrapper[4860]: I0121 21:34:39.335280 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-cdv2f" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.839843 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs"] Jan 21 21:34:40 crc kubenswrapper[4860]: E0121 21:34:40.841948 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbf5887-97be-4c75-8de2-81b1484db9f5" containerName="mariadb-database-create" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.842067 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbf5887-97be-4c75-8de2-81b1484db9f5" containerName="mariadb-database-create" Jan 21 21:34:40 crc kubenswrapper[4860]: E0121 21:34:40.842141 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d99bd-a6ee-4d83-b601-41d63c5d408c" containerName="mariadb-account-create-update" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.842204 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d99bd-a6ee-4d83-b601-41d63c5d408c" containerName="mariadb-account-create-update" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.842452 4860 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4fbf5887-97be-4c75-8de2-81b1484db9f5" containerName="mariadb-database-create" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.842520 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d99bd-a6ee-4d83-b601-41d63c5d408c" containerName="mariadb-account-create-update" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.843246 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.847117 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6t4dd" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.847385 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.857489 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs"] Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.873162 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2pkl\" (UniqueName: \"kubernetes.io/projected/71b92928-b56a-4621-8959-594cd055b50b-kube-api-access-r2pkl\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.873473 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.873619 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.873725 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-config-data\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.976415 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.976475 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-config-data\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 21:34:40.976562 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2pkl\" (UniqueName: \"kubernetes.io/projected/71b92928-b56a-4621-8959-594cd055b50b-kube-api-access-r2pkl\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:40 crc kubenswrapper[4860]: I0121 
21:34:40.976596 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:41 crc kubenswrapper[4860]: I0121 21:34:40.994290 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:41 crc kubenswrapper[4860]: I0121 21:34:40.994381 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:41 crc kubenswrapper[4860]: I0121 21:34:40.995175 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-config-data\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:41 crc kubenswrapper[4860]: I0121 21:34:41.005174 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2pkl\" (UniqueName: \"kubernetes.io/projected/71b92928-b56a-4621-8959-594cd055b50b-kube-api-access-r2pkl\") pod \"watcher-kuttl-db-sync-bhqxs\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:41 crc kubenswrapper[4860]: I0121 
21:34:41.177178 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:41 crc kubenswrapper[4860]: I0121 21:34:41.697776 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs"] Jan 21 21:34:42 crc kubenswrapper[4860]: I0121 21:34:42.393440 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" event={"ID":"71b92928-b56a-4621-8959-594cd055b50b","Type":"ContainerStarted","Data":"b5084912036c77078d058de68911d6ad2f6c077af202d00e0507d3380ed0b59f"} Jan 21 21:34:42 crc kubenswrapper[4860]: I0121 21:34:42.393506 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" event={"ID":"71b92928-b56a-4621-8959-594cd055b50b","Type":"ContainerStarted","Data":"0fb778e1c8f5a1d040a41fa5f1496331423074a44fca75a42ecb4d3e8d4908ee"} Jan 21 21:34:42 crc kubenswrapper[4860]: I0121 21:34:42.410358 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" podStartSLOduration=2.410323684 podStartE2EDuration="2.410323684s" podCreationTimestamp="2026-01-21 21:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:34:42.408457186 +0000 UTC m=+1574.630635666" watchObservedRunningTime="2026-01-21 21:34:42.410323684 +0000 UTC m=+1574.632502154" Jan 21 21:34:46 crc kubenswrapper[4860]: I0121 21:34:46.436778 4860 generic.go:334] "Generic (PLEG): container finished" podID="71b92928-b56a-4621-8959-594cd055b50b" containerID="b5084912036c77078d058de68911d6ad2f6c077af202d00e0507d3380ed0b59f" exitCode=0 Jan 21 21:34:46 crc kubenswrapper[4860]: I0121 21:34:46.437777 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" 
event={"ID":"71b92928-b56a-4621-8959-594cd055b50b","Type":"ContainerDied","Data":"b5084912036c77078d058de68911d6ad2f6c077af202d00e0507d3380ed0b59f"} Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.068761 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.188303 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-db-sync-config-data\") pod \"71b92928-b56a-4621-8959-594cd055b50b\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.188376 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-combined-ca-bundle\") pod \"71b92928-b56a-4621-8959-594cd055b50b\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.188448 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2pkl\" (UniqueName: \"kubernetes.io/projected/71b92928-b56a-4621-8959-594cd055b50b-kube-api-access-r2pkl\") pod \"71b92928-b56a-4621-8959-594cd055b50b\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.188506 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-config-data\") pod \"71b92928-b56a-4621-8959-594cd055b50b\" (UID: \"71b92928-b56a-4621-8959-594cd055b50b\") " Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.198121 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "71b92928-b56a-4621-8959-594cd055b50b" (UID: "71b92928-b56a-4621-8959-594cd055b50b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.208514 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b92928-b56a-4621-8959-594cd055b50b-kube-api-access-r2pkl" (OuterVolumeSpecName: "kube-api-access-r2pkl") pod "71b92928-b56a-4621-8959-594cd055b50b" (UID: "71b92928-b56a-4621-8959-594cd055b50b"). InnerVolumeSpecName "kube-api-access-r2pkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.220808 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71b92928-b56a-4621-8959-594cd055b50b" (UID: "71b92928-b56a-4621-8959-594cd055b50b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.241173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-config-data" (OuterVolumeSpecName: "config-data") pod "71b92928-b56a-4621-8959-594cd055b50b" (UID: "71b92928-b56a-4621-8959-594cd055b50b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.304035 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.304154 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.304174 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2pkl\" (UniqueName: \"kubernetes.io/projected/71b92928-b56a-4621-8959-594cd055b50b-kube-api-access-r2pkl\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.304193 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b92928-b56a-4621-8959-594cd055b50b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.460353 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" event={"ID":"71b92928-b56a-4621-8959-594cd055b50b","Type":"ContainerDied","Data":"0fb778e1c8f5a1d040a41fa5f1496331423074a44fca75a42ecb4d3e8d4908ee"} Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.460412 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb778e1c8f5a1d040a41fa5f1496331423074a44fca75a42ecb4d3e8d4908ee" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.460512 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.523511 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/crc-debug-zr7qh"] Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.524195 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/crc-debug-zr7qh" podUID="a28ae956-41bc-4160-8edc-f40247e5612d" containerName="container-00" containerID="cri-o://5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd" gracePeriod=2 Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.533040 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/crc-debug-zr7qh"] Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.584348 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:34:48 crc kubenswrapper[4860]: E0121 21:34:48.584781 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.586739 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/crc-debug-zr7qh" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.710116 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khdvl\" (UniqueName: \"kubernetes.io/projected/a28ae956-41bc-4160-8edc-f40247e5612d-kube-api-access-khdvl\") pod \"a28ae956-41bc-4160-8edc-f40247e5612d\" (UID: \"a28ae956-41bc-4160-8edc-f40247e5612d\") " Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.710202 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a28ae956-41bc-4160-8edc-f40247e5612d-host\") pod \"a28ae956-41bc-4160-8edc-f40247e5612d\" (UID: \"a28ae956-41bc-4160-8edc-f40247e5612d\") " Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.710356 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a28ae956-41bc-4160-8edc-f40247e5612d-host" (OuterVolumeSpecName: "host") pod "a28ae956-41bc-4160-8edc-f40247e5612d" (UID: "a28ae956-41bc-4160-8edc-f40247e5612d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.710658 4860 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a28ae956-41bc-4160-8edc-f40247e5612d-host\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.715192 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a28ae956-41bc-4160-8edc-f40247e5612d-kube-api-access-khdvl" (OuterVolumeSpecName: "kube-api-access-khdvl") pod "a28ae956-41bc-4160-8edc-f40247e5612d" (UID: "a28ae956-41bc-4160-8edc-f40247e5612d"). InnerVolumeSpecName "kube-api-access-khdvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.793297 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:48 crc kubenswrapper[4860]: E0121 21:34:48.793731 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a28ae956-41bc-4160-8edc-f40247e5612d" containerName="container-00" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.793750 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a28ae956-41bc-4160-8edc-f40247e5612d" containerName="container-00" Jan 21 21:34:48 crc kubenswrapper[4860]: E0121 21:34:48.793777 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b92928-b56a-4621-8959-594cd055b50b" containerName="watcher-kuttl-db-sync" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.793786 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b92928-b56a-4621-8959-594cd055b50b" containerName="watcher-kuttl-db-sync" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.793981 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b92928-b56a-4621-8959-594cd055b50b" containerName="watcher-kuttl-db-sync" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.793998 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a28ae956-41bc-4160-8edc-f40247e5612d" containerName="container-00" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.794976 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.801049 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.805629 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.812004 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.812781 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khdvl\" (UniqueName: \"kubernetes.io/projected/a28ae956-41bc-4160-8edc-f40247e5612d-kube-api-access-khdvl\") on node \"crc\" DevicePath \"\"" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.813658 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.823355 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.824224 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.828399 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.828779 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6t4dd" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.844915 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.868029 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.869606 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.879762 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.897296 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.919120 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.919216 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.919250 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.919287 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: 
\"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.919307 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m6lg\" (UniqueName: \"kubernetes.io/projected/e4ca4965-593d-4341-9e60-fc065881b3de-kube-api-access-5m6lg\") pod \"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.919336 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.919374 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd5543d-fe4d-4440-b911-0832adcc8eef-logs\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.919403 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.919442 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsp5f\" (UniqueName: \"kubernetes.io/projected/0bd5543d-fe4d-4440-b911-0832adcc8eef-kube-api-access-zsp5f\") pod 
\"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.919460 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:48 crc kubenswrapper[4860]: I0121 21:34:48.919504 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4ca4965-593d-4341-9e60-fc065881b3de-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.020916 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.020991 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0d62723-f343-4381-980d-b1600505269a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021029 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd5543d-fe4d-4440-b911-0832adcc8eef-logs\") pod \"watcher-kuttl-api-0\" (UID: 
\"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021053 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rlp\" (UniqueName: \"kubernetes.io/projected/d0d62723-f343-4381-980d-b1600505269a-kube-api-access-p4rlp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021082 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021128 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsp5f\" (UniqueName: \"kubernetes.io/projected/0bd5543d-fe4d-4440-b911-0832adcc8eef-kube-api-access-zsp5f\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021169 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021201 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-combined-ca-bundle\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021237 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021276 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021308 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4ca4965-593d-4341-9e60-fc065881b3de-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021331 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021361 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021387 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021426 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021455 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m6lg\" (UniqueName: \"kubernetes.io/projected/e4ca4965-593d-4341-9e60-fc065881b3de-kube-api-access-5m6lg\") pod \"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.021823 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd5543d-fe4d-4440-b911-0832adcc8eef-logs\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.024062 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4ca4965-593d-4341-9e60-fc065881b3de-logs\") pod 
\"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.027257 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.027966 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.030070 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.034528 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.035373 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.035617 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.036339 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.050620 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m6lg\" (UniqueName: \"kubernetes.io/projected/e4ca4965-593d-4341-9e60-fc065881b3de-kube-api-access-5m6lg\") pod \"watcher-kuttl-applier-0\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.055255 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsp5f\" (UniqueName: \"kubernetes.io/projected/0bd5543d-fe4d-4440-b911-0832adcc8eef-kube-api-access-zsp5f\") pod \"watcher-kuttl-api-0\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.119417 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.122483 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rlp\" (UniqueName: \"kubernetes.io/projected/d0d62723-f343-4381-980d-b1600505269a-kube-api-access-p4rlp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.122551 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.122579 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.122623 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.122695 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0d62723-f343-4381-980d-b1600505269a-logs\") pod \"watcher-kuttl-decision-engine-0\" 
(UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.123306 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0d62723-f343-4381-980d-b1600505269a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.128143 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.129011 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.129124 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.136927 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.147076 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rlp\" (UniqueName: \"kubernetes.io/projected/d0d62723-f343-4381-980d-b1600505269a-kube-api-access-p4rlp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.197655 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.473975 4860 generic.go:334] "Generic (PLEG): container finished" podID="a28ae956-41bc-4160-8edc-f40247e5612d" containerID="5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd" exitCode=130 Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.474374 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/crc-debug-zr7qh" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.474384 4860 scope.go:117] "RemoveContainer" containerID="5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.515360 4860 scope.go:117] "RemoveContainer" containerID="5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd" Jan 21 21:34:49 crc kubenswrapper[4860]: E0121 21:34:49.516408 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd\": container with ID starting with 5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd not found: ID does not exist" containerID="5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.516469 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd"} err="failed to get container status \"5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd\": rpc error: code = NotFound desc = could not find container \"5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd\": container with ID starting with 5c7082efd5579f090deb9ef6d73bc2074803db2bd9c00239853a9edb633100cd not found: ID does not exist" Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.705979 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:34:49 crc kubenswrapper[4860]: W0121 21:34:49.723405 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bd5543d_fe4d_4440_b911_0832adcc8eef.slice/crio-5bced3ba1c118b060d071943b0165fb48065d79c417f8d2e61b05b8510155ce8 WatchSource:0}: Error finding 
container 5bced3ba1c118b060d071943b0165fb48065d79c417f8d2e61b05b8510155ce8: Status 404 returned error can't find the container with id 5bced3ba1c118b060d071943b0165fb48065d79c417f8d2e61b05b8510155ce8 Jan 21 21:34:49 crc kubenswrapper[4860]: I0121 21:34:49.853644 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:34:50 crc kubenswrapper[4860]: I0121 21:34:50.232564 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:34:50 crc kubenswrapper[4860]: W0121 21:34:50.270163 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4ca4965_593d_4341_9e60_fc065881b3de.slice/crio-2a9e7f5ac56d60b5130899f83811ad6ec328538554b42f523ace383d21be283b WatchSource:0}: Error finding container 2a9e7f5ac56d60b5130899f83811ad6ec328538554b42f523ace383d21be283b: Status 404 returned error can't find the container with id 2a9e7f5ac56d60b5130899f83811ad6ec328538554b42f523ace383d21be283b Jan 21 21:34:50 crc kubenswrapper[4860]: I0121 21:34:50.503208 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"d0d62723-f343-4381-980d-b1600505269a","Type":"ContainerStarted","Data":"90abb0fb92ea72e4f3aa511ce8a44df1de5039e8b6e174de91c805589fbf50a9"} Jan 21 21:34:50 crc kubenswrapper[4860]: I0121 21:34:50.505517 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e4ca4965-593d-4341-9e60-fc065881b3de","Type":"ContainerStarted","Data":"2a9e7f5ac56d60b5130899f83811ad6ec328538554b42f523ace383d21be283b"} Jan 21 21:34:50 crc kubenswrapper[4860]: I0121 21:34:50.511920 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"0bd5543d-fe4d-4440-b911-0832adcc8eef","Type":"ContainerStarted","Data":"9b5672234289613d834d093a5dce726f01817a402849d5391d89277beb66ca27"} Jan 21 21:34:50 crc kubenswrapper[4860]: I0121 21:34:50.512003 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"0bd5543d-fe4d-4440-b911-0832adcc8eef","Type":"ContainerStarted","Data":"5bced3ba1c118b060d071943b0165fb48065d79c417f8d2e61b05b8510155ce8"} Jan 21 21:34:50 crc kubenswrapper[4860]: I0121 21:34:50.595842 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a28ae956-41bc-4160-8edc-f40247e5612d" path="/var/lib/kubelet/pods/a28ae956-41bc-4160-8edc-f40247e5612d/volumes" Jan 21 21:34:51 crc kubenswrapper[4860]: I0121 21:34:51.527664 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"0bd5543d-fe4d-4440-b911-0832adcc8eef","Type":"ContainerStarted","Data":"a9b10c3385524f9284fe55269a3ab06859dec5ee860896e057ac2a550bc24562"} Jan 21 21:34:51 crc kubenswrapper[4860]: I0121 21:34:51.528305 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:34:51 crc kubenswrapper[4860]: I0121 21:34:51.529747 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"d0d62723-f343-4381-980d-b1600505269a","Type":"ContainerStarted","Data":"58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d"} Jan 21 21:34:51 crc kubenswrapper[4860]: I0121 21:34:51.531550 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e4ca4965-593d-4341-9e60-fc065881b3de","Type":"ContainerStarted","Data":"57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518"} Jan 21 21:34:51 crc kubenswrapper[4860]: I0121 21:34:51.561909 4860 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=3.561884931 podStartE2EDuration="3.561884931s" podCreationTimestamp="2026-01-21 21:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:34:51.558206717 +0000 UTC m=+1583.780385187" watchObservedRunningTime="2026-01-21 21:34:51.561884931 +0000 UTC m=+1583.784063401"
Jan 21 21:34:51 crc kubenswrapper[4860]: I0121 21:34:51.589497 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=3.589466522 podStartE2EDuration="3.589466522s" podCreationTimestamp="2026-01-21 21:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:34:51.581092504 +0000 UTC m=+1583.803271004" watchObservedRunningTime="2026-01-21 21:34:51.589466522 +0000 UTC m=+1583.811644992"
Jan 21 21:34:51 crc kubenswrapper[4860]: I0121 21:34:51.608494 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=3.608456038 podStartE2EDuration="3.608456038s" podCreationTimestamp="2026-01-21 21:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:34:51.606299882 +0000 UTC m=+1583.828478352" watchObservedRunningTime="2026-01-21 21:34:51.608456038 +0000 UTC m=+1583.830634518"
Jan 21 21:34:54 crc kubenswrapper[4860]: I0121 21:34:54.121720 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:34:54 crc kubenswrapper[4860]: I0121 21:34:54.124101 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 21:34:54 crc kubenswrapper[4860]: I0121 21:34:54.137889 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:34:54 crc kubenswrapper[4860]: I0121 21:34:54.261468 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:34:59 crc kubenswrapper[4860]: I0121 21:34:59.121256 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:34:59 crc kubenswrapper[4860]: I0121 21:34:59.137620 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:34:59 crc kubenswrapper[4860]: I0121 21:34:59.137832 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:34:59 crc kubenswrapper[4860]: I0121 21:34:59.188352 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:34:59 crc kubenswrapper[4860]: I0121 21:34:59.199163 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:34:59 crc kubenswrapper[4860]: I0121 21:34:59.233826 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:34:59 crc kubenswrapper[4860]: I0121 21:34:59.645226 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:34:59 crc kubenswrapper[4860]: I0121 21:34:59.685624 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:34:59 crc kubenswrapper[4860]: I0121 21:34:59.849974 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:34:59 crc kubenswrapper[4860]: I0121 21:34:59.873808 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:35:00 crc kubenswrapper[4860]: I0121 21:35:00.831085 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r4ppf"]
Jan 21 21:35:00 crc kubenswrapper[4860]: I0121 21:35:00.833860 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4ppf"
Jan 21 21:35:00 crc kubenswrapper[4860]: I0121 21:35:00.852601 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4ppf"]
Jan 21 21:35:00 crc kubenswrapper[4860]: I0121 21:35:00.925403 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-utilities\") pod \"certified-operators-r4ppf\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " pod="openshift-marketplace/certified-operators-r4ppf"
Jan 21 21:35:00 crc kubenswrapper[4860]: I0121 21:35:00.925566 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-catalog-content\") pod \"certified-operators-r4ppf\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " pod="openshift-marketplace/certified-operators-r4ppf"
Jan 21 21:35:00 crc kubenswrapper[4860]: I0121 21:35:00.926069 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gfg9\" (UniqueName: \"kubernetes.io/projected/42d351da-f3e3-49a9-839f-03e2d927dc92-kube-api-access-5gfg9\") pod \"certified-operators-r4ppf\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " pod="openshift-marketplace/certified-operators-r4ppf"
Jan 21 21:35:01 crc kubenswrapper[4860]: I0121 21:35:01.028033 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gfg9\" (UniqueName: \"kubernetes.io/projected/42d351da-f3e3-49a9-839f-03e2d927dc92-kube-api-access-5gfg9\") pod \"certified-operators-r4ppf\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " pod="openshift-marketplace/certified-operators-r4ppf"
Jan 21 21:35:01 crc kubenswrapper[4860]: I0121 21:35:01.028153 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-utilities\") pod \"certified-operators-r4ppf\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " pod="openshift-marketplace/certified-operators-r4ppf"
Jan 21 21:35:01 crc kubenswrapper[4860]: I0121 21:35:01.028310 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-catalog-content\") pod \"certified-operators-r4ppf\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " pod="openshift-marketplace/certified-operators-r4ppf"
Jan 21 21:35:01 crc kubenswrapper[4860]: I0121 21:35:01.028843 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-utilities\") pod \"certified-operators-r4ppf\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " pod="openshift-marketplace/certified-operators-r4ppf"
Jan 21 21:35:01 crc kubenswrapper[4860]: I0121 21:35:01.029032 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-catalog-content\") pod \"certified-operators-r4ppf\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " pod="openshift-marketplace/certified-operators-r4ppf"
Jan 21 21:35:01 crc kubenswrapper[4860]: I0121 21:35:01.056536 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gfg9\" (UniqueName: \"kubernetes.io/projected/42d351da-f3e3-49a9-839f-03e2d927dc92-kube-api-access-5gfg9\") pod \"certified-operators-r4ppf\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " pod="openshift-marketplace/certified-operators-r4ppf"
Jan 21 21:35:01 crc kubenswrapper[4860]: I0121 21:35:01.182171 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4ppf"
Jan 21 21:35:01 crc kubenswrapper[4860]: I0121 21:35:01.631432 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:01 crc kubenswrapper[4860]: I0121 21:35:01.999747 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4ppf"]
Jan 21 21:35:02 crc kubenswrapper[4860]: I0121 21:35:02.581408 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97"
Jan 21 21:35:02 crc kubenswrapper[4860]: E0121 21:35:02.582285 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:35:02 crc kubenswrapper[4860]: I0121 21:35:02.713813 4860 generic.go:334] "Generic (PLEG): container finished" podID="42d351da-f3e3-49a9-839f-03e2d927dc92" containerID="78a14290218419e07971436626c345f5ebe37f30c30cef02fe64954e4f725b66" exitCode=0
Jan 21 21:35:02 crc kubenswrapper[4860]: I0121 21:35:02.713922 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4ppf" event={"ID":"42d351da-f3e3-49a9-839f-03e2d927dc92","Type":"ContainerDied","Data":"78a14290218419e07971436626c345f5ebe37f30c30cef02fe64954e4f725b66"}
Jan 21 21:35:02 crc kubenswrapper[4860]: I0121 21:35:02.714003 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4ppf" event={"ID":"42d351da-f3e3-49a9-839f-03e2d927dc92","Type":"ContainerStarted","Data":"13d584cef3506b4d9b7d66ac34f18ce2cc1cdca2a295b4ca08f2c03d68a99c5e"}
Jan 21 21:35:02 crc kubenswrapper[4860]: I0121 21:35:02.878352 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:35:02 crc kubenswrapper[4860]: I0121 21:35:02.878747 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="ceilometer-central-agent" containerID="cri-o://e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9" gracePeriod=30
Jan 21 21:35:02 crc kubenswrapper[4860]: I0121 21:35:02.879493 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="proxy-httpd" containerID="cri-o://31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9" gracePeriod=30
Jan 21 21:35:02 crc kubenswrapper[4860]: I0121 21:35:02.879560 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="sg-core" containerID="cri-o://218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1" gracePeriod=30
Jan 21 21:35:02 crc kubenswrapper[4860]: I0121 21:35:02.879606 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="ceilometer-notification-agent" containerID="cri-o://7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48" gracePeriod=30
Jan 21 21:35:03 crc kubenswrapper[4860]: I0121 21:35:03.803582 4860 generic.go:334] "Generic (PLEG): container finished" podID="142126d0-df41-40a8-abe2-00359a595e88" containerID="31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9" exitCode=0
Jan 21 21:35:03 crc kubenswrapper[4860]: I0121 21:35:03.804044 4860 generic.go:334] "Generic (PLEG): container finished" podID="142126d0-df41-40a8-abe2-00359a595e88" containerID="218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1" exitCode=2
Jan 21 21:35:03 crc kubenswrapper[4860]: I0121 21:35:03.804053 4860 generic.go:334] "Generic (PLEG): container finished" podID="142126d0-df41-40a8-abe2-00359a595e88" containerID="e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9" exitCode=0
Jan 21 21:35:03 crc kubenswrapper[4860]: I0121 21:35:03.804083 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"142126d0-df41-40a8-abe2-00359a595e88","Type":"ContainerDied","Data":"31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9"}
Jan 21 21:35:03 crc kubenswrapper[4860]: I0121 21:35:03.804123 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"142126d0-df41-40a8-abe2-00359a595e88","Type":"ContainerDied","Data":"218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1"}
Jan 21 21:35:03 crc kubenswrapper[4860]: I0121 21:35:03.804139 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"142126d0-df41-40a8-abe2-00359a595e88","Type":"ContainerDied","Data":"e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9"}
Jan 21 21:35:04 crc kubenswrapper[4860]: I0121 21:35:04.817452 4860 generic.go:334] "Generic (PLEG): container finished" podID="42d351da-f3e3-49a9-839f-03e2d927dc92" containerID="da11788de778c4564c8e3f9eda7d9f4f680f06f69b5e1a68c529f370e98ec511" exitCode=0
Jan 21 21:35:04 crc kubenswrapper[4860]: I0121 21:35:04.817671 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4ppf" event={"ID":"42d351da-f3e3-49a9-839f-03e2d927dc92","Type":"ContainerDied","Data":"da11788de778c4564c8e3f9eda7d9f4f680f06f69b5e1a68c529f370e98ec511"}
Jan 21 21:35:05 crc kubenswrapper[4860]: I0121 21:35:05.832758 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4ppf" event={"ID":"42d351da-f3e3-49a9-839f-03e2d927dc92","Type":"ContainerStarted","Data":"c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74"}
Jan 21 21:35:05 crc kubenswrapper[4860]: I0121 21:35:05.863191 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r4ppf" podStartSLOduration=3.285725776 podStartE2EDuration="5.863126692s" podCreationTimestamp="2026-01-21 21:35:00 +0000 UTC" firstStartedPulling="2026-01-21 21:35:02.716649234 +0000 UTC m=+1594.938827704" lastFinishedPulling="2026-01-21 21:35:05.29405014 +0000 UTC m=+1597.516228620" observedRunningTime="2026-01-21 21:35:05.853548486 +0000 UTC m=+1598.075726966" watchObservedRunningTime="2026-01-21 21:35:05.863126692 +0000 UTC m=+1598.085305162"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.332994 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.465572 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-scripts\") pod \"142126d0-df41-40a8-abe2-00359a595e88\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") "
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.465703 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-run-httpd\") pod \"142126d0-df41-40a8-abe2-00359a595e88\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") "
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.465731 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-sg-core-conf-yaml\") pod \"142126d0-df41-40a8-abe2-00359a595e88\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") "
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.465852 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-combined-ca-bundle\") pod \"142126d0-df41-40a8-abe2-00359a595e88\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") "
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.465883 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-log-httpd\") pod \"142126d0-df41-40a8-abe2-00359a595e88\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") "
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.466063 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-ceilometer-tls-certs\") pod \"142126d0-df41-40a8-abe2-00359a595e88\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") "
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.466128 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbp7l\" (UniqueName: \"kubernetes.io/projected/142126d0-df41-40a8-abe2-00359a595e88-kube-api-access-lbp7l\") pod \"142126d0-df41-40a8-abe2-00359a595e88\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") "
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.466153 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-config-data\") pod \"142126d0-df41-40a8-abe2-00359a595e88\" (UID: \"142126d0-df41-40a8-abe2-00359a595e88\") "
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.466911 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "142126d0-df41-40a8-abe2-00359a595e88" (UID: "142126d0-df41-40a8-abe2-00359a595e88"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.467216 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "142126d0-df41-40a8-abe2-00359a595e88" (UID: "142126d0-df41-40a8-abe2-00359a595e88"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.481019 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-scripts" (OuterVolumeSpecName: "scripts") pod "142126d0-df41-40a8-abe2-00359a595e88" (UID: "142126d0-df41-40a8-abe2-00359a595e88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.481158 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/142126d0-df41-40a8-abe2-00359a595e88-kube-api-access-lbp7l" (OuterVolumeSpecName: "kube-api-access-lbp7l") pod "142126d0-df41-40a8-abe2-00359a595e88" (UID: "142126d0-df41-40a8-abe2-00359a595e88"). InnerVolumeSpecName "kube-api-access-lbp7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.499044 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "142126d0-df41-40a8-abe2-00359a595e88" (UID: "142126d0-df41-40a8-abe2-00359a595e88"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.556610 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "142126d0-df41-40a8-abe2-00359a595e88" (UID: "142126d0-df41-40a8-abe2-00359a595e88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.564594 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "142126d0-df41-40a8-abe2-00359a595e88" (UID: "142126d0-df41-40a8-abe2-00359a595e88"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.568711 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbp7l\" (UniqueName: \"kubernetes.io/projected/142126d0-df41-40a8-abe2-00359a595e88-kube-api-access-lbp7l\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.568752 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.568772 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.568790 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.568816 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.568829 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/142126d0-df41-40a8-abe2-00359a595e88-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.568844 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.661997 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-config-data" (OuterVolumeSpecName: "config-data") pod "142126d0-df41-40a8-abe2-00359a595e88" (UID: "142126d0-df41-40a8-abe2-00359a595e88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.670383 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142126d0-df41-40a8-abe2-00359a595e88-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.857117 4860 generic.go:334] "Generic (PLEG): container finished" podID="142126d0-df41-40a8-abe2-00359a595e88" containerID="7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48" exitCode=0
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.857179 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"142126d0-df41-40a8-abe2-00359a595e88","Type":"ContainerDied","Data":"7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48"}
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.857295 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.858270 4860 scope.go:117] "RemoveContainer" containerID="31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.858225 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"142126d0-df41-40a8-abe2-00359a595e88","Type":"ContainerDied","Data":"d87d78fa92c2e272db731932c6bcad0da6d1148c740353c48544adc9b49e1bf6"}
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.894567 4860 scope.go:117] "RemoveContainer" containerID="218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.904204 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.915310 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.924642 4860 scope.go:117] "RemoveContainer" containerID="7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.944819 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:35:06 crc kubenswrapper[4860]: E0121 21:35:06.945600 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="sg-core"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.945635 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="sg-core"
Jan 21 21:35:06 crc kubenswrapper[4860]: E0121 21:35:06.945683 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="proxy-httpd"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.945696 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="proxy-httpd"
Jan 21 21:35:06 crc kubenswrapper[4860]: E0121 21:35:06.945708 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="ceilometer-central-agent"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.945719 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="ceilometer-central-agent"
Jan 21 21:35:06 crc kubenswrapper[4860]: E0121 21:35:06.945759 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="ceilometer-notification-agent"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.945768 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="ceilometer-notification-agent"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.946170 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="ceilometer-notification-agent"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.946242 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="sg-core"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.946265 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="ceilometer-central-agent"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.946284 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="142126d0-df41-40a8-abe2-00359a595e88" containerName="proxy-httpd"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.949609 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.955161 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.955584 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.955782 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.956124 4860 scope.go:117] "RemoveContainer" containerID="e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.956288 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.980535 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-scripts\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.980645 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-run-httpd\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.980710 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.980766 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.980839 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z2b9\" (UniqueName: \"kubernetes.io/projected/bd259d8e-c8c3-408f-bca2-2c5a21a06266-kube-api-access-2z2b9\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.980858 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.980920 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-config-data\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:06 crc kubenswrapper[4860]: I0121 21:35:06.981038 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-log-httpd\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.001117 4860 scope.go:117] "RemoveContainer" containerID="31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9"
Jan 21 21:35:07 crc kubenswrapper[4860]: E0121 21:35:07.002100 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9\": container with ID starting with 31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9 not found: ID does not exist" containerID="31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.002153 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9"} err="failed to get container status \"31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9\": rpc error: code = NotFound desc = could not find container \"31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9\": container with ID starting with 31b3d891600c08bbeec20c6be7474082bd005514a2ae634c5ebb9ece1d1887b9 not found: ID does not exist"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.002187 4860 scope.go:117] "RemoveContainer" containerID="218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1"
Jan 21 21:35:07 crc kubenswrapper[4860]: E0121 21:35:07.002503 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1\": container with ID starting with 218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1 not found: ID does not exist" containerID="218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.002538 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1"} err="failed to get container status \"218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1\": rpc error: code = NotFound desc = could not find container \"218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1\": container with ID starting with 218626e27699327635c95c80337d662deeec9684be0525a2b551d924aa8e28a1 not found: ID does not exist"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.002555 4860 scope.go:117] "RemoveContainer" containerID="7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48"
Jan 21 21:35:07 crc kubenswrapper[4860]: E0121 21:35:07.002929 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48\": container with ID starting with 7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48 not found: ID does not exist" containerID="7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.003021 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48"} err="failed to get container status \"7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48\": rpc error: code = NotFound desc = could not find container \"7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48\": container with ID starting with 7936f2e471d9459b81f8f62952f29d7863e08b0925fef678f13cfa45229b1d48 not found: ID does not exist"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.003055 4860 scope.go:117] "RemoveContainer" containerID="e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9"
Jan 21 21:35:07 crc kubenswrapper[4860]: E0121 21:35:07.003389 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9\": container with ID starting with e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9 not found: ID does not exist" containerID="e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.003409 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9"} err="failed to get container status \"e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9\": rpc error: code = NotFound desc = could not find container \"e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9\": container with ID starting with e1500e9288df42e6448919ac241e26ba724ff02e503e252c0567a6b0fdad4bc9 not found: ID does not exist"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.082768 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.082857 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z2b9\" (UniqueName: \"kubernetes.io/projected/bd259d8e-c8c3-408f-bca2-2c5a21a06266-kube-api-access-2z2b9\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.082888 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.082924 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-config-data\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.082981 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-log-httpd\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.083045 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-scripts\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.083106 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-run-httpd\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.083145 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0" Jan 
21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.084105 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-run-httpd\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.084696 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-log-httpd\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.089965 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-scripts\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.096841 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.102470 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.102551 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.102776 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-config-data\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.106997 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z2b9\" (UniqueName: \"kubernetes.io/projected/bd259d8e-c8c3-408f-bca2-2c5a21a06266-kube-api-access-2z2b9\") pod \"ceilometer-0\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.272660 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.817630 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:35:07 crc kubenswrapper[4860]: I0121 21:35:07.874624 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd259d8e-c8c3-408f-bca2-2c5a21a06266","Type":"ContainerStarted","Data":"f61e1912fe03e479ea515523c32b3ec4227d3ec5f3066c75d73db565376cb9be"} Jan 21 21:35:08 crc kubenswrapper[4860]: I0121 21:35:08.590310 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="142126d0-df41-40a8-abe2-00359a595e88" path="/var/lib/kubelet/pods/142126d0-df41-40a8-abe2-00359a595e88/volumes" Jan 21 21:35:08 crc kubenswrapper[4860]: I0121 21:35:08.895948 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd259d8e-c8c3-408f-bca2-2c5a21a06266","Type":"ContainerStarted","Data":"1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f"} Jan 21 21:35:10 crc kubenswrapper[4860]: I0121 21:35:10.920075 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd259d8e-c8c3-408f-bca2-2c5a21a06266","Type":"ContainerStarted","Data":"960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204"} Jan 21 21:35:11 crc kubenswrapper[4860]: I0121 21:35:11.183368 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r4ppf" Jan 21 21:35:11 crc kubenswrapper[4860]: I0121 21:35:11.183477 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r4ppf" Jan 21 21:35:11 crc kubenswrapper[4860]: I0121 21:35:11.250521 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r4ppf" Jan 21 21:35:11 crc 
kubenswrapper[4860]: I0121 21:35:11.934708 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd259d8e-c8c3-408f-bca2-2c5a21a06266","Type":"ContainerStarted","Data":"ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c"} Jan 21 21:35:11 crc kubenswrapper[4860]: I0121 21:35:11.998138 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r4ppf" Jan 21 21:35:12 crc kubenswrapper[4860]: I0121 21:35:12.950355 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd259d8e-c8c3-408f-bca2-2c5a21a06266","Type":"ContainerStarted","Data":"2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685"} Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.403237 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.719375592 podStartE2EDuration="7.403196559s" podCreationTimestamp="2026-01-21 21:35:06 +0000 UTC" firstStartedPulling="2026-01-21 21:35:07.842205444 +0000 UTC m=+1600.064383914" lastFinishedPulling="2026-01-21 21:35:12.526026411 +0000 UTC m=+1604.748204881" observedRunningTime="2026-01-21 21:35:13.399185416 +0000 UTC m=+1605.621363896" watchObservedRunningTime="2026-01-21 21:35:13.403196559 +0000 UTC m=+1605.625375029" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.430814 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tgtmr"] Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.433168 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.454610 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgtmr"] Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.538570 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-utilities\") pod \"redhat-marketplace-tgtmr\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") " pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.538716 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-catalog-content\") pod \"redhat-marketplace-tgtmr\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") " pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.538977 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pbjw\" (UniqueName: \"kubernetes.io/projected/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-kube-api-access-9pbjw\") pod \"redhat-marketplace-tgtmr\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") " pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.640633 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-catalog-content\") pod \"redhat-marketplace-tgtmr\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") " pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.641174 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9pbjw\" (UniqueName: \"kubernetes.io/projected/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-kube-api-access-9pbjw\") pod \"redhat-marketplace-tgtmr\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") " pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.641212 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-utilities\") pod \"redhat-marketplace-tgtmr\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") " pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.641651 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-catalog-content\") pod \"redhat-marketplace-tgtmr\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") " pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.641740 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-utilities\") pod \"redhat-marketplace-tgtmr\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") " pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.668082 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pbjw\" (UniqueName: \"kubernetes.io/projected/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-kube-api-access-9pbjw\") pod \"redhat-marketplace-tgtmr\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") " pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.774243 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:13 crc kubenswrapper[4860]: I0121 21:35:13.965248 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:35:14 crc kubenswrapper[4860]: I0121 21:35:14.353182 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgtmr"] Jan 21 21:35:14 crc kubenswrapper[4860]: I0121 21:35:14.975117 4860 generic.go:334] "Generic (PLEG): container finished" podID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" containerID="95da0f5a6082b5288b7d6004f99933275291cd67314c4c613cb43d13ff72bf37" exitCode=0 Jan 21 21:35:14 crc kubenswrapper[4860]: I0121 21:35:14.975235 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgtmr" event={"ID":"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2","Type":"ContainerDied","Data":"95da0f5a6082b5288b7d6004f99933275291cd67314c4c613cb43d13ff72bf37"} Jan 21 21:35:14 crc kubenswrapper[4860]: I0121 21:35:14.975668 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgtmr" event={"ID":"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2","Type":"ContainerStarted","Data":"439bfd2a1897abe45b55e2fe83df08ba5e887522d303463553fd686b48c8e2a3"} Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.015556 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4ppf"] Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.022556 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r4ppf" podUID="42d351da-f3e3-49a9-839f-03e2d927dc92" containerName="registry-server" containerID="cri-o://c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74" gracePeriod=2 Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.580085 4860 scope.go:117] "RemoveContainer" 
containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:35:16 crc kubenswrapper[4860]: E0121 21:35:16.580834 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.655662 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4ppf" Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.827919 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-catalog-content\") pod \"42d351da-f3e3-49a9-839f-03e2d927dc92\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.828070 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-utilities\") pod \"42d351da-f3e3-49a9-839f-03e2d927dc92\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.828140 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gfg9\" (UniqueName: \"kubernetes.io/projected/42d351da-f3e3-49a9-839f-03e2d927dc92-kube-api-access-5gfg9\") pod \"42d351da-f3e3-49a9-839f-03e2d927dc92\" (UID: \"42d351da-f3e3-49a9-839f-03e2d927dc92\") " Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.829859 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-utilities" (OuterVolumeSpecName: "utilities") pod "42d351da-f3e3-49a9-839f-03e2d927dc92" (UID: "42d351da-f3e3-49a9-839f-03e2d927dc92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.856321 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d351da-f3e3-49a9-839f-03e2d927dc92-kube-api-access-5gfg9" (OuterVolumeSpecName: "kube-api-access-5gfg9") pod "42d351da-f3e3-49a9-839f-03e2d927dc92" (UID: "42d351da-f3e3-49a9-839f-03e2d927dc92"). InnerVolumeSpecName "kube-api-access-5gfg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.905661 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42d351da-f3e3-49a9-839f-03e2d927dc92" (UID: "42d351da-f3e3-49a9-839f-03e2d927dc92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.931761 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.931797 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42d351da-f3e3-49a9-839f-03e2d927dc92-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:16 crc kubenswrapper[4860]: I0121 21:35:16.931808 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gfg9\" (UniqueName: \"kubernetes.io/projected/42d351da-f3e3-49a9-839f-03e2d927dc92-kube-api-access-5gfg9\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.003574 4860 generic.go:334] "Generic (PLEG): container finished" podID="42d351da-f3e3-49a9-839f-03e2d927dc92" containerID="c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74" exitCode=0 Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.004236 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4ppf" event={"ID":"42d351da-f3e3-49a9-839f-03e2d927dc92","Type":"ContainerDied","Data":"c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74"} Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.004295 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4ppf" event={"ID":"42d351da-f3e3-49a9-839f-03e2d927dc92","Type":"ContainerDied","Data":"13d584cef3506b4d9b7d66ac34f18ce2cc1cdca2a295b4ca08f2c03d68a99c5e"} Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.004326 4860 scope.go:117] "RemoveContainer" containerID="c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74" Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 
21:35:17.004537 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4ppf" Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.021312 4860 generic.go:334] "Generic (PLEG): container finished" podID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" containerID="b9472a2ded32cbf920947c9cc45cf78715a20d847af86eb44f858a0e9601fe92" exitCode=0 Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.021397 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgtmr" event={"ID":"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2","Type":"ContainerDied","Data":"b9472a2ded32cbf920947c9cc45cf78715a20d847af86eb44f858a0e9601fe92"} Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.071403 4860 scope.go:117] "RemoveContainer" containerID="da11788de778c4564c8e3f9eda7d9f4f680f06f69b5e1a68c529f370e98ec511" Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.093195 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4ppf"] Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.101501 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r4ppf"] Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.105321 4860 scope.go:117] "RemoveContainer" containerID="78a14290218419e07971436626c345f5ebe37f30c30cef02fe64954e4f725b66" Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.140874 4860 scope.go:117] "RemoveContainer" containerID="c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74" Jan 21 21:35:17 crc kubenswrapper[4860]: E0121 21:35:17.149026 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74\": container with ID starting with c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74 not found: ID does not exist" 
containerID="c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74" Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.149099 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74"} err="failed to get container status \"c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74\": rpc error: code = NotFound desc = could not find container \"c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74\": container with ID starting with c56678c8bbb467489693d8ffd2b7f3d35158867e3ce488f1ad4ab35d0ecf1b74 not found: ID does not exist" Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.149132 4860 scope.go:117] "RemoveContainer" containerID="da11788de778c4564c8e3f9eda7d9f4f680f06f69b5e1a68c529f370e98ec511" Jan 21 21:35:17 crc kubenswrapper[4860]: E0121 21:35:17.150957 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da11788de778c4564c8e3f9eda7d9f4f680f06f69b5e1a68c529f370e98ec511\": container with ID starting with da11788de778c4564c8e3f9eda7d9f4f680f06f69b5e1a68c529f370e98ec511 not found: ID does not exist" containerID="da11788de778c4564c8e3f9eda7d9f4f680f06f69b5e1a68c529f370e98ec511" Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.150987 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da11788de778c4564c8e3f9eda7d9f4f680f06f69b5e1a68c529f370e98ec511"} err="failed to get container status \"da11788de778c4564c8e3f9eda7d9f4f680f06f69b5e1a68c529f370e98ec511\": rpc error: code = NotFound desc = could not find container \"da11788de778c4564c8e3f9eda7d9f4f680f06f69b5e1a68c529f370e98ec511\": container with ID starting with da11788de778c4564c8e3f9eda7d9f4f680f06f69b5e1a68c529f370e98ec511 not found: ID does not exist" Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.151009 4860 scope.go:117] 
"RemoveContainer" containerID="78a14290218419e07971436626c345f5ebe37f30c30cef02fe64954e4f725b66" Jan 21 21:35:17 crc kubenswrapper[4860]: E0121 21:35:17.151464 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a14290218419e07971436626c345f5ebe37f30c30cef02fe64954e4f725b66\": container with ID starting with 78a14290218419e07971436626c345f5ebe37f30c30cef02fe64954e4f725b66 not found: ID does not exist" containerID="78a14290218419e07971436626c345f5ebe37f30c30cef02fe64954e4f725b66" Jan 21 21:35:17 crc kubenswrapper[4860]: I0121 21:35:17.151488 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a14290218419e07971436626c345f5ebe37f30c30cef02fe64954e4f725b66"} err="failed to get container status \"78a14290218419e07971436626c345f5ebe37f30c30cef02fe64954e4f725b66\": rpc error: code = NotFound desc = could not find container \"78a14290218419e07971436626c345f5ebe37f30c30cef02fe64954e4f725b66\": container with ID starting with 78a14290218419e07971436626c345f5ebe37f30c30cef02fe64954e4f725b66 not found: ID does not exist" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.043805 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgtmr" event={"ID":"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2","Type":"ContainerStarted","Data":"87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc"} Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.087774 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tgtmr" podStartSLOduration=2.25259999 podStartE2EDuration="5.087735739s" podCreationTimestamp="2026-01-21 21:35:13 +0000 UTC" firstStartedPulling="2026-01-21 21:35:14.977636805 +0000 UTC m=+1607.199815275" lastFinishedPulling="2026-01-21 21:35:17.812772544 +0000 UTC m=+1610.034951024" observedRunningTime="2026-01-21 21:35:18.075658946 
+0000 UTC m=+1610.297837436" watchObservedRunningTime="2026-01-21 21:35:18.087735739 +0000 UTC m=+1610.309914209" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.358919 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.359370 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="d0d62723-f343-4381-980d-b1600505269a" containerName="watcher-decision-engine" containerID="cri-o://58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d" gracePeriod=30 Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.378894 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.379321 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/memcached-0" podUID="c1817e64-9ce0-4542-a32b-da4c6dd08267" containerName="memcached" containerID="cri-o://2f650d5d3612430dfd43d6115a2d8e7645b9260515dd6ad2a51ea8d741fd7530" gracePeriod=30 Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.510006 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.510369 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="e4ca4965-593d-4341-9e60-fc065881b3de" containerName="watcher-applier" containerID="cri-o://57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518" gracePeriod=30 Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.544240 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.544606 4860 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerName="watcher-kuttl-api-log" containerID="cri-o://9b5672234289613d834d093a5dce726f01817a402849d5391d89277beb66ca27" gracePeriod=30 Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.544683 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerName="watcher-api" containerID="cri-o://a9b10c3385524f9284fe55269a3ab06859dec5ee860896e057ac2a550bc24562" gracePeriod=30 Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.592398 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d351da-f3e3-49a9-839f-03e2d927dc92" path="/var/lib/kubelet/pods/42d351da-f3e3-49a9-839f-03e2d927dc92/volumes" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.706640 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-d868p"] Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.717238 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-d868p"] Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.885503 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-bpl6z"] Jan 21 21:35:18 crc kubenswrapper[4860]: E0121 21:35:18.886079 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d351da-f3e3-49a9-839f-03e2d927dc92" containerName="registry-server" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.886107 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d351da-f3e3-49a9-839f-03e2d927dc92" containerName="registry-server" Jan 21 21:35:18 crc kubenswrapper[4860]: E0121 21:35:18.886136 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d351da-f3e3-49a9-839f-03e2d927dc92" 
containerName="extract-content" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.886148 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d351da-f3e3-49a9-839f-03e2d927dc92" containerName="extract-content" Jan 21 21:35:18 crc kubenswrapper[4860]: E0121 21:35:18.886166 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d351da-f3e3-49a9-839f-03e2d927dc92" containerName="extract-utilities" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.886173 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d351da-f3e3-49a9-839f-03e2d927dc92" containerName="extract-utilities" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.886359 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d351da-f3e3-49a9-839f-03e2d927dc92" containerName="registry-server" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.887090 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.892303 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-mtls" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.898215 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.931570 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-bpl6z"] Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.983804 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-fernet-keys\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 
21:35:18.983886 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-cert-memcached-mtls\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.983987 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-credential-keys\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.984019 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xlcj\" (UniqueName: \"kubernetes.io/projected/9a636e47-103a-4fb0-9cdd-567e47cae4c1-kube-api-access-9xlcj\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.984040 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-combined-ca-bundle\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.984070 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-config-data\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " 
pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:18 crc kubenswrapper[4860]: I0121 21:35:18.984098 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-scripts\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.057922 4860 generic.go:334] "Generic (PLEG): container finished" podID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerID="9b5672234289613d834d093a5dce726f01817a402849d5391d89277beb66ca27" exitCode=143 Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.057993 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"0bd5543d-fe4d-4440-b911-0832adcc8eef","Type":"ContainerDied","Data":"9b5672234289613d834d093a5dce726f01817a402849d5391d89277beb66ca27"} Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.086017 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-scripts\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.086197 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-fernet-keys\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.086252 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-cert-memcached-mtls\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.086784 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-credential-keys\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.086951 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xlcj\" (UniqueName: \"kubernetes.io/projected/9a636e47-103a-4fb0-9cdd-567e47cae4c1-kube-api-access-9xlcj\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.087458 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-combined-ca-bundle\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.087602 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-config-data\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.096571 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-scripts\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.096751 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-fernet-keys\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.096806 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-cert-memcached-mtls\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.097075 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-credential-keys\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.098480 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-combined-ca-bundle\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.099366 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-config-data\") pod 
\"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.132050 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xlcj\" (UniqueName: \"kubernetes.io/projected/9a636e47-103a-4fb0-9cdd-567e47cae4c1-kube-api-access-9xlcj\") pod \"keystone-bootstrap-bpl6z\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") " pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: E0121 21:35:19.142444 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:35:19 crc kubenswrapper[4860]: E0121 21:35:19.145044 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:35:19 crc kubenswrapper[4860]: E0121 21:35:19.147531 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:35:19 crc kubenswrapper[4860]: E0121 21:35:19.147753 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="e4ca4965-593d-4341-9e60-fc065881b3de" containerName="watcher-applier" Jan 21 21:35:19 crc kubenswrapper[4860]: E0121 21:35:19.200667 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 21 21:35:19 crc kubenswrapper[4860]: E0121 21:35:19.203706 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 21 21:35:19 crc kubenswrapper[4860]: E0121 21:35:19.205573 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 21 21:35:19 crc kubenswrapper[4860]: E0121 21:35:19.205703 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="d0d62723-f343-4381-980d-b1600505269a" containerName="watcher-decision-engine" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.225590 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:19 crc kubenswrapper[4860]: I0121 21:35:19.864445 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-bpl6z"] Jan 21 21:35:20 crc kubenswrapper[4860]: I0121 21:35:20.072830 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" event={"ID":"9a636e47-103a-4fb0-9cdd-567e47cae4c1","Type":"ContainerStarted","Data":"dd57bb584ae868be3e8d11b455be5408c70df4a27042a32a202f711925430af7"} Jan 21 21:35:20 crc kubenswrapper[4860]: I0121 21:35:20.595855 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec5cc83-a75e-49ca-963c-423f2d6af9c1" path="/var/lib/kubelet/pods/0ec5cc83-a75e-49ca-963c-423f2d6af9c1/volumes" Jan 21 21:35:20 crc kubenswrapper[4860]: I0121 21:35:20.695146 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.153:9322/\": read tcp 10.217.0.2:44596->10.217.0.153:9322: read: connection reset by peer" Jan 21 21:35:20 crc kubenswrapper[4860]: I0121 21:35:20.695215 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.153:9322/\": read tcp 10.217.0.2:44612->10.217.0.153:9322: read: connection reset by peer" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.084415 4860 generic.go:334] "Generic (PLEG): container finished" podID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerID="a9b10c3385524f9284fe55269a3ab06859dec5ee860896e057ac2a550bc24562" exitCode=0 Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.084500 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"0bd5543d-fe4d-4440-b911-0832adcc8eef","Type":"ContainerDied","Data":"a9b10c3385524f9284fe55269a3ab06859dec5ee860896e057ac2a550bc24562"} Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.085892 4860 generic.go:334] "Generic (PLEG): container finished" podID="c1817e64-9ce0-4542-a32b-da4c6dd08267" containerID="2f650d5d3612430dfd43d6115a2d8e7645b9260515dd6ad2a51ea8d741fd7530" exitCode=0 Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.085949 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"c1817e64-9ce0-4542-a32b-da4c6dd08267","Type":"ContainerDied","Data":"2f650d5d3612430dfd43d6115a2d8e7645b9260515dd6ad2a51ea8d741fd7530"} Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.087103 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" event={"ID":"9a636e47-103a-4fb0-9cdd-567e47cae4c1","Type":"ContainerStarted","Data":"65d660c1bbb467d539a9e30d9ad0e3a8746e6a20c96620d0964fcec9a0959484"} Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.114188 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" podStartSLOduration=3.114162951 podStartE2EDuration="3.114162951s" podCreationTimestamp="2026-01-21 21:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:35:21.106117833 +0000 UTC m=+1613.328296333" watchObservedRunningTime="2026-01-21 21:35:21.114162951 +0000 UTC m=+1613.336341421" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.242861 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.357836 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-combined-ca-bundle\") pod \"0bd5543d-fe4d-4440-b911-0832adcc8eef\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.357906 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-public-tls-certs\") pod \"0bd5543d-fe4d-4440-b911-0832adcc8eef\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.357945 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-internal-tls-certs\") pod \"0bd5543d-fe4d-4440-b911-0832adcc8eef\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.357993 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsp5f\" (UniqueName: \"kubernetes.io/projected/0bd5543d-fe4d-4440-b911-0832adcc8eef-kube-api-access-zsp5f\") pod \"0bd5543d-fe4d-4440-b911-0832adcc8eef\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.358460 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-config-data\") pod \"0bd5543d-fe4d-4440-b911-0832adcc8eef\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.358549 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd5543d-fe4d-4440-b911-0832adcc8eef-logs\") pod \"0bd5543d-fe4d-4440-b911-0832adcc8eef\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.358660 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-custom-prometheus-ca\") pod \"0bd5543d-fe4d-4440-b911-0832adcc8eef\" (UID: \"0bd5543d-fe4d-4440-b911-0832adcc8eef\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.359389 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bd5543d-fe4d-4440-b911-0832adcc8eef-logs" (OuterVolumeSpecName: "logs") pod "0bd5543d-fe4d-4440-b911-0832adcc8eef" (UID: "0bd5543d-fe4d-4440-b911-0832adcc8eef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.359849 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd5543d-fe4d-4440-b911-0832adcc8eef-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.367306 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd5543d-fe4d-4440-b911-0832adcc8eef-kube-api-access-zsp5f" (OuterVolumeSpecName: "kube-api-access-zsp5f") pod "0bd5543d-fe4d-4440-b911-0832adcc8eef" (UID: "0bd5543d-fe4d-4440-b911-0832adcc8eef"). InnerVolumeSpecName "kube-api-access-zsp5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.410157 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bd5543d-fe4d-4440-b911-0832adcc8eef" (UID: "0bd5543d-fe4d-4440-b911-0832adcc8eef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.431418 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0bd5543d-fe4d-4440-b911-0832adcc8eef" (UID: "0bd5543d-fe4d-4440-b911-0832adcc8eef"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.466323 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.466371 4860 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.466382 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsp5f\" (UniqueName: \"kubernetes.io/projected/0bd5543d-fe4d-4440-b911-0832adcc8eef-kube-api-access-zsp5f\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.469225 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-custom-prometheus-ca" 
(OuterVolumeSpecName: "custom-prometheus-ca") pod "0bd5543d-fe4d-4440-b911-0832adcc8eef" (UID: "0bd5543d-fe4d-4440-b911-0832adcc8eef"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.469503 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0bd5543d-fe4d-4440-b911-0832adcc8eef" (UID: "0bd5543d-fe4d-4440-b911-0832adcc8eef"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.498181 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-config-data" (OuterVolumeSpecName: "config-data") pod "0bd5543d-fe4d-4440-b911-0832adcc8eef" (UID: "0bd5543d-fe4d-4440-b911-0832adcc8eef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.498866 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.568275 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.568602 4860 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.568665 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd5543d-fe4d-4440-b911-0832adcc8eef-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.717236 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvs42\" (UniqueName: \"kubernetes.io/projected/c1817e64-9ce0-4542-a32b-da4c6dd08267-kube-api-access-zvs42\") pod \"c1817e64-9ce0-4542-a32b-da4c6dd08267\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.717415 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-config-data\") pod \"c1817e64-9ce0-4542-a32b-da4c6dd08267\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.717534 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-combined-ca-bundle\") pod \"c1817e64-9ce0-4542-a32b-da4c6dd08267\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 
21:35:21.717573 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-kolla-config\") pod \"c1817e64-9ce0-4542-a32b-da4c6dd08267\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.717622 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-memcached-tls-certs\") pod \"c1817e64-9ce0-4542-a32b-da4c6dd08267\" (UID: \"c1817e64-9ce0-4542-a32b-da4c6dd08267\") " Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.721699 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-config-data" (OuterVolumeSpecName: "config-data") pod "c1817e64-9ce0-4542-a32b-da4c6dd08267" (UID: "c1817e64-9ce0-4542-a32b-da4c6dd08267"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.725957 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c1817e64-9ce0-4542-a32b-da4c6dd08267" (UID: "c1817e64-9ce0-4542-a32b-da4c6dd08267"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.735163 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1817e64-9ce0-4542-a32b-da4c6dd08267-kube-api-access-zvs42" (OuterVolumeSpecName: "kube-api-access-zvs42") pod "c1817e64-9ce0-4542-a32b-da4c6dd08267" (UID: "c1817e64-9ce0-4542-a32b-da4c6dd08267"). InnerVolumeSpecName "kube-api-access-zvs42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.757920 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1817e64-9ce0-4542-a32b-da4c6dd08267" (UID: "c1817e64-9ce0-4542-a32b-da4c6dd08267"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.826253 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "c1817e64-9ce0-4542-a32b-da4c6dd08267" (UID: "c1817e64-9ce0-4542-a32b-da4c6dd08267"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.833275 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.833331 4860 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.833345 4860 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1817e64-9ce0-4542-a32b-da4c6dd08267-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.833358 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvs42\" (UniqueName: 
\"kubernetes.io/projected/c1817e64-9ce0-4542-a32b-da4c6dd08267-kube-api-access-zvs42\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:21 crc kubenswrapper[4860]: I0121 21:35:21.833382 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1817e64-9ce0-4542-a32b-da4c6dd08267-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.127565 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.127701 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"0bd5543d-fe4d-4440-b911-0832adcc8eef","Type":"ContainerDied","Data":"5bced3ba1c118b060d071943b0165fb48065d79c417f8d2e61b05b8510155ce8"} Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.127778 4860 scope.go:117] "RemoveContainer" containerID="a9b10c3385524f9284fe55269a3ab06859dec5ee860896e057ac2a550bc24562" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.132469 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"c1817e64-9ce0-4542-a32b-da4c6dd08267","Type":"ContainerDied","Data":"7a40e35cb875291455cc100e166bd8562d1b64021c0acb4cb1d34c6569cf190e"} Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.133196 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.169265 4860 scope.go:117] "RemoveContainer" containerID="9b5672234289613d834d093a5dce726f01817a402849d5391d89277beb66ca27" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.199653 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.217443 4860 scope.go:117] "RemoveContainer" containerID="2f650d5d3612430dfd43d6115a2d8e7645b9260515dd6ad2a51ea8d741fd7530" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.233151 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.255239 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.273122 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.289144 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:22 crc kubenswrapper[4860]: E0121 21:35:22.289766 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerName="watcher-kuttl-api-log" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.289792 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerName="watcher-kuttl-api-log" Jan 21 21:35:22 crc kubenswrapper[4860]: E0121 21:35:22.289811 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerName="watcher-api" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.289819 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerName="watcher-api" Jan 21 21:35:22 crc kubenswrapper[4860]: E0121 21:35:22.289835 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1817e64-9ce0-4542-a32b-da4c6dd08267" containerName="memcached" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.289842 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1817e64-9ce0-4542-a32b-da4c6dd08267" containerName="memcached" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.290037 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerName="watcher-api" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.290064 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1817e64-9ce0-4542-a32b-da4c6dd08267" containerName="memcached" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.290084 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd5543d-fe4d-4440-b911-0832adcc8eef" containerName="watcher-kuttl-api-log" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.291363 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.296012 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.296306 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.297104 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.306484 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.315608 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.317126 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.324491 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-rbkxl" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.324927 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.325203 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.339414 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.449744 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.449953 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.450004 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" 
Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.450048 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-448zq\" (UniqueName: \"kubernetes.io/projected/d5212be1-2224-44a6-a24f-bdc146578181-kube-api-access-448zq\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.450261 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.450385 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17062c8a-d76d-4565-9ec1-a0a2d83ad784-kolla-config\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.450418 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/17062c8a-d76d-4565-9ec1-a0a2d83ad784-memcached-tls-certs\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.450665 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gltt5\" (UniqueName: \"kubernetes.io/projected/17062c8a-d76d-4565-9ec1-a0a2d83ad784-kube-api-access-gltt5\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" 
Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.450975 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17062c8a-d76d-4565-9ec1-a0a2d83ad784-combined-ca-bundle\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.451018 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5212be1-2224-44a6-a24f-bdc146578181-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.451096 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.451139 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17062c8a-d76d-4565-9ec1-a0a2d83ad784-config-data\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.451251 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 
21:35:22.553367 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.553466 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.553541 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.553573 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.555088 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-448zq\" (UniqueName: \"kubernetes.io/projected/d5212be1-2224-44a6-a24f-bdc146578181-kube-api-access-448zq\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.555257 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.556510 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17062c8a-d76d-4565-9ec1-a0a2d83ad784-kolla-config\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.556544 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/17062c8a-d76d-4565-9ec1-a0a2d83ad784-memcached-tls-certs\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.556600 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gltt5\" (UniqueName: \"kubernetes.io/projected/17062c8a-d76d-4565-9ec1-a0a2d83ad784-kube-api-access-gltt5\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.556708 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17062c8a-d76d-4565-9ec1-a0a2d83ad784-combined-ca-bundle\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.556736 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d5212be1-2224-44a6-a24f-bdc146578181-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.556844 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.556876 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17062c8a-d76d-4565-9ec1-a0a2d83ad784-config-data\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.558914 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5212be1-2224-44a6-a24f-bdc146578181-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.559423 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.559954 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.560335 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17062c8a-d76d-4565-9ec1-a0a2d83ad784-kolla-config\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.561594 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.562630 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.564546 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.565317 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17062c8a-d76d-4565-9ec1-a0a2d83ad784-config-data\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.566987 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.579618 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17062c8a-d76d-4565-9ec1-a0a2d83ad784-combined-ca-bundle\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.580104 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/17062c8a-d76d-4565-9ec1-a0a2d83ad784-memcached-tls-certs\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.586836 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-448zq\" (UniqueName: \"kubernetes.io/projected/d5212be1-2224-44a6-a24f-bdc146578181-kube-api-access-448zq\") pod \"watcher-kuttl-api-0\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.588145 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gltt5\" (UniqueName: \"kubernetes.io/projected/17062c8a-d76d-4565-9ec1-a0a2d83ad784-kube-api-access-gltt5\") pod \"memcached-0\" (UID: \"17062c8a-d76d-4565-9ec1-a0a2d83ad784\") " pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.602723 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd5543d-fe4d-4440-b911-0832adcc8eef" 
path="/var/lib/kubelet/pods/0bd5543d-fe4d-4440-b911-0832adcc8eef/volumes" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.606561 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1817e64-9ce0-4542-a32b-da4c6dd08267" path="/var/lib/kubelet/pods/c1817e64-9ce0-4542-a32b-da4c6dd08267/volumes" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.620516 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:22 crc kubenswrapper[4860]: I0121 21:35:22.647786 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.005257 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.150617 4860 generic.go:334] "Generic (PLEG): container finished" podID="e4ca4965-593d-4341-9e60-fc065881b3de" containerID="57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518" exitCode=0 Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.150674 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e4ca4965-593d-4341-9e60-fc065881b3de","Type":"ContainerDied","Data":"57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518"} Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.150706 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e4ca4965-593d-4341-9e60-fc065881b3de","Type":"ContainerDied","Data":"2a9e7f5ac56d60b5130899f83811ad6ec328538554b42f523ace383d21be283b"} Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.150737 4860 scope.go:117] "RemoveContainer" containerID="57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518" Jan 21 21:35:23 crc 
kubenswrapper[4860]: I0121 21:35:23.150863 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.172075 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-config-data\") pod \"e4ca4965-593d-4341-9e60-fc065881b3de\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.172276 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m6lg\" (UniqueName: \"kubernetes.io/projected/e4ca4965-593d-4341-9e60-fc065881b3de-kube-api-access-5m6lg\") pod \"e4ca4965-593d-4341-9e60-fc065881b3de\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.172398 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-combined-ca-bundle\") pod \"e4ca4965-593d-4341-9e60-fc065881b3de\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.172623 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4ca4965-593d-4341-9e60-fc065881b3de-logs\") pod \"e4ca4965-593d-4341-9e60-fc065881b3de\" (UID: \"e4ca4965-593d-4341-9e60-fc065881b3de\") " Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.173147 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ca4965-593d-4341-9e60-fc065881b3de-logs" (OuterVolumeSpecName: "logs") pod "e4ca4965-593d-4341-9e60-fc065881b3de" (UID: "e4ca4965-593d-4341-9e60-fc065881b3de"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.180438 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ca4965-593d-4341-9e60-fc065881b3de-kube-api-access-5m6lg" (OuterVolumeSpecName: "kube-api-access-5m6lg") pod "e4ca4965-593d-4341-9e60-fc065881b3de" (UID: "e4ca4965-593d-4341-9e60-fc065881b3de"). InnerVolumeSpecName "kube-api-access-5m6lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.208244 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4ca4965-593d-4341-9e60-fc065881b3de" (UID: "e4ca4965-593d-4341-9e60-fc065881b3de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.211945 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.273710 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.273924 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4ca4965-593d-4341-9e60-fc065881b3de-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.273954 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m6lg\" (UniqueName: \"kubernetes.io/projected/e4ca4965-593d-4341-9e60-fc065881b3de-kube-api-access-5m6lg\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.279973 4860 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-config-data" (OuterVolumeSpecName: "config-data") pod "e4ca4965-593d-4341-9e60-fc065881b3de" (UID: "e4ca4965-593d-4341-9e60-fc065881b3de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.307207 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.345432 4860 scope.go:117] "RemoveContainer" containerID="57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518" Jan 21 21:35:23 crc kubenswrapper[4860]: E0121 21:35:23.347412 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518\": container with ID starting with 57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518 not found: ID does not exist" containerID="57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.347508 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518"} err="failed to get container status \"57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518\": rpc error: code = NotFound desc = could not find container \"57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518\": container with ID starting with 57737fb90b09a3256f9a55210ad30e9266578448ba6a9f9e69fbe8912b107518 not found: ID does not exist" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.376308 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ca4965-593d-4341-9e60-fc065881b3de-config-data\") on node 
\"crc\" DevicePath \"\"" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.516517 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.533204 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.547342 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.558565 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:35:23 crc kubenswrapper[4860]: E0121 21:35:23.559279 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d62723-f343-4381-980d-b1600505269a" containerName="watcher-decision-engine" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.559303 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d62723-f343-4381-980d-b1600505269a" containerName="watcher-decision-engine" Jan 21 21:35:23 crc kubenswrapper[4860]: E0121 21:35:23.559331 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ca4965-593d-4341-9e60-fc065881b3de" containerName="watcher-applier" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.559341 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ca4965-593d-4341-9e60-fc065881b3de" containerName="watcher-applier" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.559601 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ca4965-593d-4341-9e60-fc065881b3de" containerName="watcher-applier" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.559633 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d62723-f343-4381-980d-b1600505269a" containerName="watcher-decision-engine" Jan 21 21:35:23 crc 
kubenswrapper[4860]: I0121 21:35:23.562777 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.566304 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.585644 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.687052 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4rlp\" (UniqueName: \"kubernetes.io/projected/d0d62723-f343-4381-980d-b1600505269a-kube-api-access-p4rlp\") pod \"d0d62723-f343-4381-980d-b1600505269a\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.687284 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-config-data\") pod \"d0d62723-f343-4381-980d-b1600505269a\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.687325 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-custom-prometheus-ca\") pod \"d0d62723-f343-4381-980d-b1600505269a\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.687436 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0d62723-f343-4381-980d-b1600505269a-logs\") pod \"d0d62723-f343-4381-980d-b1600505269a\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") " Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 
21:35:23.687970 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-combined-ca-bundle\") pod \"d0d62723-f343-4381-980d-b1600505269a\" (UID: \"d0d62723-f343-4381-980d-b1600505269a\") "
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.688080 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0d62723-f343-4381-980d-b1600505269a-logs" (OuterVolumeSpecName: "logs") pod "d0d62723-f343-4381-980d-b1600505269a" (UID: "d0d62723-f343-4381-980d-b1600505269a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.688745 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d0733a-5369-4bee-98b5-44f2d588ccf7-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.688815 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.688849 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.689164 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76d86\" (UniqueName: \"kubernetes.io/projected/d4d0733a-5369-4bee-98b5-44f2d588ccf7-kube-api-access-76d86\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.689270 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.689411 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0d62723-f343-4381-980d-b1600505269a-logs\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.691872 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0d62723-f343-4381-980d-b1600505269a-kube-api-access-p4rlp" (OuterVolumeSpecName: "kube-api-access-p4rlp") pod "d0d62723-f343-4381-980d-b1600505269a" (UID: "d0d62723-f343-4381-980d-b1600505269a"). InnerVolumeSpecName "kube-api-access-p4rlp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.715962 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0d62723-f343-4381-980d-b1600505269a" (UID: "d0d62723-f343-4381-980d-b1600505269a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.734164 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "d0d62723-f343-4381-980d-b1600505269a" (UID: "d0d62723-f343-4381-980d-b1600505269a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.750878 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-config-data" (OuterVolumeSpecName: "config-data") pod "d0d62723-f343-4381-980d-b1600505269a" (UID: "d0d62723-f343-4381-980d-b1600505269a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.775063 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tgtmr"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.775575 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tgtmr"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.790699 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.791223 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.791394 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76d86\" (UniqueName: \"kubernetes.io/projected/d4d0733a-5369-4bee-98b5-44f2d588ccf7-kube-api-access-76d86\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.791478 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.791607 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d0733a-5369-4bee-98b5-44f2d588ccf7-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.791677 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4rlp\" (UniqueName: \"kubernetes.io/projected/d0d62723-f343-4381-980d-b1600505269a-kube-api-access-p4rlp\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.791689 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.791699 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.791711 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0d62723-f343-4381-980d-b1600505269a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.792108 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d0733a-5369-4bee-98b5-44f2d588ccf7-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.797249 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.805705 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.805986 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.814653 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76d86\" (UniqueName: \"kubernetes.io/projected/d4d0733a-5369-4bee-98b5-44f2d588ccf7-kube-api-access-76d86\") pod \"watcher-kuttl-applier-0\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.835861 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tgtmr"
Jan 21 21:35:23 crc kubenswrapper[4860]: I0121 21:35:23.916730 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.168084 4860 generic.go:334] "Generic (PLEG): container finished" podID="d0d62723-f343-4381-980d-b1600505269a" containerID="58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d" exitCode=0
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.168512 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"d0d62723-f343-4381-980d-b1600505269a","Type":"ContainerDied","Data":"58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d"}
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.168553 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"d0d62723-f343-4381-980d-b1600505269a","Type":"ContainerDied","Data":"90abb0fb92ea72e4f3aa511ce8a44df1de5039e8b6e174de91c805589fbf50a9"}
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.168577 4860 scope.go:117] "RemoveContainer" containerID="58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.168734 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.172962 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d5212be1-2224-44a6-a24f-bdc146578181","Type":"ContainerStarted","Data":"c24f721b1989b75a6d1fddf6eea8f9e46deb8fb9c7fa41408f7dca7db0992e75"}
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.173029 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d5212be1-2224-44a6-a24f-bdc146578181","Type":"ContainerStarted","Data":"a65b543986d1678d92608580c459e6caeca3d86f3f3e3e6bae65b76fdb51bdf1"}
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.173042 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d5212be1-2224-44a6-a24f-bdc146578181","Type":"ContainerStarted","Data":"fce31cd0f1a60e6c3d9e9467cbb11caf9a7c77417fddfb6289cf20f029aca9b4"}
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.174687 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.201381 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"17062c8a-d76d-4565-9ec1-a0a2d83ad784","Type":"ContainerStarted","Data":"af4ed940f0c7a5fe09e50f94ba6cd6eccdc3b6473f0f4994c26944d29c695d90"}
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.201465 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.201478 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"17062c8a-d76d-4565-9ec1-a0a2d83ad784","Type":"ContainerStarted","Data":"51ada049915f50d2ab630c261ed328aedd5cd0f18ab295758f66f3fdaa3b5c95"}
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.202732 4860 scope.go:117] "RemoveContainer" containerID="58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d"
Jan 21 21:35:24 crc kubenswrapper[4860]: E0121 21:35:24.211600 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d\": container with ID starting with 58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d not found: ID does not exist" containerID="58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.211671 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d"} err="failed to get container status \"58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d\": rpc error: code = NotFound desc = could not find container \"58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d\": container with ID starting with 58f5d5c874c5122c432ee8ac92821216204b1e950b3c78cc5d07faf440930c9d not found: ID does not exist"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.254218 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.254179928 podStartE2EDuration="2.254179928s" podCreationTimestamp="2026-01-21 21:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:35:24.210968495 +0000 UTC m=+1616.433146975" watchObservedRunningTime="2026-01-21 21:35:24.254179928 +0000 UTC m=+1616.476358398"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.303314 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.303253623 podStartE2EDuration="2.303253623s" podCreationTimestamp="2026-01-21 21:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:35:24.24873515 +0000 UTC m=+1616.470913620" watchObservedRunningTime="2026-01-21 21:35:24.303253623 +0000 UTC m=+1616.525432103"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.310005 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tgtmr"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.342862 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.354070 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.366024 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.368126 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.377024 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.380327 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 21 21:35:24 crc kubenswrapper[4860]: W0121 21:35:24.437257 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4d0733a_5369_4bee_98b5_44f2d588ccf7.slice/crio-fac690d18965d1f8c0b6a617dffc2541864e4e57731fbddfc044cca198b4f593 WatchSource:0}: Error finding container fac690d18965d1f8c0b6a617dffc2541864e4e57731fbddfc044cca198b4f593: Status 404 returned error can't find the container with id fac690d18965d1f8c0b6a617dffc2541864e4e57731fbddfc044cca198b4f593
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.442613 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.538760 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.538840 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.538924 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.539070 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7rp\" (UniqueName: \"kubernetes.io/projected/068f0a99-9308-4095-b015-9c13638ca80b-kube-api-access-4d7rp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.539157 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.539210 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068f0a99-9308-4095-b015-9c13638ca80b-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.601628 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0d62723-f343-4381-980d-b1600505269a" path="/var/lib/kubelet/pods/d0d62723-f343-4381-980d-b1600505269a/volumes"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.602644 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ca4965-593d-4341-9e60-fc065881b3de" path="/var/lib/kubelet/pods/e4ca4965-593d-4341-9e60-fc065881b3de/volumes"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.641708 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d7rp\" (UniqueName: \"kubernetes.io/projected/068f0a99-9308-4095-b015-9c13638ca80b-kube-api-access-4d7rp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.641790 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.641817 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068f0a99-9308-4095-b015-9c13638ca80b-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.641954 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.641989 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.642062 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.644028 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068f0a99-9308-4095-b015-9c13638ca80b-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.651712 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.655960 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.658973 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.662145 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.684102 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d7rp\" (UniqueName: \"kubernetes.io/projected/068f0a99-9308-4095-b015-9c13638ca80b-kube-api-access-4d7rp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:24 crc kubenswrapper[4860]: I0121 21:35:24.706169 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:35:25 crc kubenswrapper[4860]: I0121 21:35:25.266369 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d4d0733a-5369-4bee-98b5-44f2d588ccf7","Type":"ContainerStarted","Data":"fac690d18965d1f8c0b6a617dffc2541864e4e57731fbddfc044cca198b4f593"}
Jan 21 21:35:25 crc kubenswrapper[4860]: I0121 21:35:25.679309 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:35:25 crc kubenswrapper[4860]: W0121 21:35:25.684687 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068f0a99_9308_4095_b015_9c13638ca80b.slice/crio-ad85e8d22aa8518b992b8e72b2060becb11bed35e3e17830bc6170f10da4e230 WatchSource:0}: Error finding container ad85e8d22aa8518b992b8e72b2060becb11bed35e3e17830bc6170f10da4e230: Status 404 returned error can't find the container with id ad85e8d22aa8518b992b8e72b2060becb11bed35e3e17830bc6170f10da4e230
Jan 21 21:35:25 crc kubenswrapper[4860]: I0121 21:35:25.811479 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgtmr"]
Jan 21 21:35:26 crc kubenswrapper[4860]: I0121 21:35:26.291155 4860 generic.go:334] "Generic (PLEG): container finished" podID="9a636e47-103a-4fb0-9cdd-567e47cae4c1" containerID="65d660c1bbb467d539a9e30d9ad0e3a8746e6a20c96620d0964fcec9a0959484" exitCode=0
Jan 21 21:35:26 crc kubenswrapper[4860]: I0121 21:35:26.291381 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" event={"ID":"9a636e47-103a-4fb0-9cdd-567e47cae4c1","Type":"ContainerDied","Data":"65d660c1bbb467d539a9e30d9ad0e3a8746e6a20c96620d0964fcec9a0959484"}
Jan 21 21:35:26 crc kubenswrapper[4860]: I0121 21:35:26.293673 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d4d0733a-5369-4bee-98b5-44f2d588ccf7","Type":"ContainerStarted","Data":"854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f"}
Jan 21 21:35:26 crc kubenswrapper[4860]: I0121 21:35:26.297262 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"068f0a99-9308-4095-b015-9c13638ca80b","Type":"ContainerStarted","Data":"0ecf0f3af20536e566b4f2098aa6e189687e99a9cb36f97c35125bf2a4760b53"}
Jan 21 21:35:26 crc kubenswrapper[4860]: I0121 21:35:26.297301 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"068f0a99-9308-4095-b015-9c13638ca80b","Type":"ContainerStarted","Data":"ad85e8d22aa8518b992b8e72b2060becb11bed35e3e17830bc6170f10da4e230"}
Jan 21 21:35:26 crc kubenswrapper[4860]: I0121 21:35:26.297354 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 21:35:26 crc kubenswrapper[4860]: I0121 21:35:26.349862 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=3.349828878 podStartE2EDuration="3.349828878s" podCreationTimestamp="2026-01-21 21:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:35:26.338674444 +0000 UTC m=+1618.560852924" watchObservedRunningTime="2026-01-21 21:35:26.349828878 +0000 UTC m=+1618.572007338"
Jan 21 21:35:26 crc kubenswrapper[4860]: I0121 21:35:26.370038 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.3700087610000002 podStartE2EDuration="2.370008761s" podCreationTimestamp="2026-01-21 21:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:35:26.366777101 +0000 UTC m=+1618.588955571" watchObservedRunningTime="2026-01-21 21:35:26.370008761 +0000 UTC m=+1618.592187231"
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.304580 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tgtmr" podUID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" containerName="registry-server" containerID="cri-o://87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc" gracePeriod=2
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.623339 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.624149 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.803571 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-bpl6z"
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.811733 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.878424 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-credential-keys\") pod \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") "
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.878546 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-cert-memcached-mtls\") pod \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") "
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.888020 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9a636e47-103a-4fb0-9cdd-567e47cae4c1" (UID: "9a636e47-103a-4fb0-9cdd-567e47cae4c1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.979986 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-scripts\") pod \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") "
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.980058 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-combined-ca-bundle\") pod \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") "
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.980149 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-fernet-keys\") pod \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") "
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.980233 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-config-data\") pod \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") "
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.980301 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xlcj\" (UniqueName: \"kubernetes.io/projected/9a636e47-103a-4fb0-9cdd-567e47cae4c1-kube-api-access-9xlcj\") pod \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\" (UID: \"9a636e47-103a-4fb0-9cdd-567e47cae4c1\") "
Jan 21 21:35:27 crc kubenswrapper[4860]: I0121 21:35:27.980823 4860 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.005314 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a636e47-103a-4fb0-9cdd-567e47cae4c1-kube-api-access-9xlcj" (OuterVolumeSpecName: "kube-api-access-9xlcj") pod "9a636e47-103a-4fb0-9cdd-567e47cae4c1" (UID: "9a636e47-103a-4fb0-9cdd-567e47cae4c1"). InnerVolumeSpecName "kube-api-access-9xlcj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.030630 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-scripts" (OuterVolumeSpecName: "scripts") pod "9a636e47-103a-4fb0-9cdd-567e47cae4c1" (UID: "9a636e47-103a-4fb0-9cdd-567e47cae4c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.031563 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9a636e47-103a-4fb0-9cdd-567e47cae4c1" (UID: "9a636e47-103a-4fb0-9cdd-567e47cae4c1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.034392 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-config-data" (OuterVolumeSpecName: "config-data") pod "9a636e47-103a-4fb0-9cdd-567e47cae4c1" (UID: "9a636e47-103a-4fb0-9cdd-567e47cae4c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.054316 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a636e47-103a-4fb0-9cdd-567e47cae4c1" (UID: "9a636e47-103a-4fb0-9cdd-567e47cae4c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.054383 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "9a636e47-103a-4fb0-9cdd-567e47cae4c1" (UID: "9a636e47-103a-4fb0-9cdd-567e47cae4c1"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.088712 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.088743 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.088753 4860 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.088763 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.088772 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xlcj\" (UniqueName: \"kubernetes.io/projected/9a636e47-103a-4fb0-9cdd-567e47cae4c1-kube-api-access-9xlcj\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.088783 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9a636e47-103a-4fb0-9cdd-567e47cae4c1-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.088945 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgtmr"
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.292559 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-catalog-content\") pod \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") "
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.293273 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-utilities\") pod \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") "
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.293356 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pbjw\" (UniqueName: \"kubernetes.io/projected/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-kube-api-access-9pbjw\") pod \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\" (UID: \"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2\") "
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.294510 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-utilities" (OuterVolumeSpecName: "utilities") pod "65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" (UID: "65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.297451 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-kube-api-access-9pbjw" (OuterVolumeSpecName: "kube-api-access-9pbjw") pod "65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" (UID: "65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2"). InnerVolumeSpecName "kube-api-access-9pbjw".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.319513 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" event={"ID":"9a636e47-103a-4fb0-9cdd-567e47cae4c1","Type":"ContainerDied","Data":"dd57bb584ae868be3e8d11b455be5408c70df4a27042a32a202f711925430af7"} Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.321033 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd57bb584ae868be3e8d11b455be5408c70df4a27042a32a202f711925430af7" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.319576 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-bpl6z" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.321208 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" (UID: "65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.322786 4860 generic.go:334] "Generic (PLEG): container finished" podID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" containerID="87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc" exitCode=0 Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.322853 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgtmr" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.322924 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgtmr" event={"ID":"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2","Type":"ContainerDied","Data":"87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc"} Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.323049 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgtmr" event={"ID":"65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2","Type":"ContainerDied","Data":"439bfd2a1897abe45b55e2fe83df08ba5e887522d303463553fd686b48c8e2a3"} Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.323076 4860 scope.go:117] "RemoveContainer" containerID="87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.353874 4860 scope.go:117] "RemoveContainer" containerID="b9472a2ded32cbf920947c9cc45cf78715a20d847af86eb44f858a0e9601fe92" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.387139 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgtmr"] Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.389575 4860 scope.go:117] "RemoveContainer" containerID="95da0f5a6082b5288b7d6004f99933275291cd67314c4c613cb43d13ff72bf37" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.397479 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.397518 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pbjw\" (UniqueName: \"kubernetes.io/projected/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-kube-api-access-9pbjw\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:28 crc 
kubenswrapper[4860]: I0121 21:35:28.397529 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.401482 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgtmr"] Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.425008 4860 scope.go:117] "RemoveContainer" containerID="87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc" Jan 21 21:35:28 crc kubenswrapper[4860]: E0121 21:35:28.430154 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc\": container with ID starting with 87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc not found: ID does not exist" containerID="87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.430336 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc"} err="failed to get container status \"87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc\": rpc error: code = NotFound desc = could not find container \"87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc\": container with ID starting with 87483f6dc9561a7d9a8553a2f27ef8eacbc074798f258449143c180ab5f6c1dc not found: ID does not exist" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.430460 4860 scope.go:117] "RemoveContainer" containerID="b9472a2ded32cbf920947c9cc45cf78715a20d847af86eb44f858a0e9601fe92" Jan 21 21:35:28 crc kubenswrapper[4860]: E0121 21:35:28.431333 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"b9472a2ded32cbf920947c9cc45cf78715a20d847af86eb44f858a0e9601fe92\": container with ID starting with b9472a2ded32cbf920947c9cc45cf78715a20d847af86eb44f858a0e9601fe92 not found: ID does not exist" containerID="b9472a2ded32cbf920947c9cc45cf78715a20d847af86eb44f858a0e9601fe92" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.431404 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9472a2ded32cbf920947c9cc45cf78715a20d847af86eb44f858a0e9601fe92"} err="failed to get container status \"b9472a2ded32cbf920947c9cc45cf78715a20d847af86eb44f858a0e9601fe92\": rpc error: code = NotFound desc = could not find container \"b9472a2ded32cbf920947c9cc45cf78715a20d847af86eb44f858a0e9601fe92\": container with ID starting with b9472a2ded32cbf920947c9cc45cf78715a20d847af86eb44f858a0e9601fe92 not found: ID does not exist" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.431440 4860 scope.go:117] "RemoveContainer" containerID="95da0f5a6082b5288b7d6004f99933275291cd67314c4c613cb43d13ff72bf37" Jan 21 21:35:28 crc kubenswrapper[4860]: E0121 21:35:28.432143 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95da0f5a6082b5288b7d6004f99933275291cd67314c4c613cb43d13ff72bf37\": container with ID starting with 95da0f5a6082b5288b7d6004f99933275291cd67314c4c613cb43d13ff72bf37 not found: ID does not exist" containerID="95da0f5a6082b5288b7d6004f99933275291cd67314c4c613cb43d13ff72bf37" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.432227 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95da0f5a6082b5288b7d6004f99933275291cd67314c4c613cb43d13ff72bf37"} err="failed to get container status \"95da0f5a6082b5288b7d6004f99933275291cd67314c4c613cb43d13ff72bf37\": rpc error: code = NotFound desc = could not find container 
\"95da0f5a6082b5288b7d6004f99933275291cd67314c4c613cb43d13ff72bf37\": container with ID starting with 95da0f5a6082b5288b7d6004f99933275291cd67314c4c613cb43d13ff72bf37 not found: ID does not exist" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.600321 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" path="/var/lib/kubelet/pods/65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2/volumes" Jan 21 21:35:28 crc kubenswrapper[4860]: I0121 21:35:28.917667 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:35:30 crc kubenswrapper[4860]: E0121 21:35:30.418122 4860 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.227:60822->38.102.83.227:38857: write tcp 38.102.83.227:60822->38.102.83.227:38857: write: broken pipe Jan 21 21:35:30 crc kubenswrapper[4860]: I0121 21:35:30.579594 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:35:30 crc kubenswrapper[4860]: E0121 21:35:30.580059 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.622419 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.631796 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.649190 
4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.844682 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-655bfffb94-t7n44"] Jan 21 21:35:32 crc kubenswrapper[4860]: E0121 21:35:32.845446 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" containerName="extract-content" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.845476 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" containerName="extract-content" Jan 21 21:35:32 crc kubenswrapper[4860]: E0121 21:35:32.845498 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" containerName="registry-server" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.845508 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" containerName="registry-server" Jan 21 21:35:32 crc kubenswrapper[4860]: E0121 21:35:32.845537 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a636e47-103a-4fb0-9cdd-567e47cae4c1" containerName="keystone-bootstrap" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.845551 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a636e47-103a-4fb0-9cdd-567e47cae4c1" containerName="keystone-bootstrap" Jan 21 21:35:32 crc kubenswrapper[4860]: E0121 21:35:32.845584 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" containerName="extract-utilities" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.845595 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" containerName="extract-utilities" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.845831 4860 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="9a636e47-103a-4fb0-9cdd-567e47cae4c1" containerName="keystone-bootstrap" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.845865 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ed08a6-32b0-41ce-b92d-ae37e6c2a6d2" containerName="registry-server" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.846870 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.870296 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-655bfffb94-t7n44"] Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.994207 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-credential-keys\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.994320 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-public-tls-certs\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.994787 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-fernet-keys\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.994902 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-internal-tls-certs\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.995321 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-scripts\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.995412 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsbsk\" (UniqueName: \"kubernetes.io/projected/75c67306-751f-46ae-8511-b77f1babd94c-kube-api-access-zsbsk\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.995546 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-combined-ca-bundle\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:32 crc kubenswrapper[4860]: I0121 21:35:32.995618 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-config-data\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:32 crc 
kubenswrapper[4860]: I0121 21:35:32.995704 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-cert-memcached-mtls\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.098097 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-public-tls-certs\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.098159 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-fernet-keys\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.098206 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-internal-tls-certs\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.098308 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-scripts\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc 
kubenswrapper[4860]: I0121 21:35:33.098337 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsbsk\" (UniqueName: \"kubernetes.io/projected/75c67306-751f-46ae-8511-b77f1babd94c-kube-api-access-zsbsk\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.098379 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-combined-ca-bundle\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.098404 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-config-data\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.098432 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-cert-memcached-mtls\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.098471 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-credential-keys\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc 
kubenswrapper[4860]: I0121 21:35:33.106822 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-cert-memcached-mtls\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.106908 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-internal-tls-certs\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.107028 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-combined-ca-bundle\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.107281 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-public-tls-certs\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.107519 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-scripts\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.111783 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-fernet-keys\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.115777 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-config-data\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.116196 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75c67306-751f-46ae-8511-b77f1babd94c-credential-keys\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.127666 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsbsk\" (UniqueName: \"kubernetes.io/projected/75c67306-751f-46ae-8511-b77f1babd94c-kube-api-access-zsbsk\") pod \"keystone-655bfffb94-t7n44\" (UID: \"75c67306-751f-46ae-8511-b77f1babd94c\") " pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.165835 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.412258 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.587733 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.678235 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-655bfffb94-t7n44"] Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.917163 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:35:33 crc kubenswrapper[4860]: I0121 21:35:33.968682 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:35:34 crc kubenswrapper[4860]: I0121 21:35:34.395593 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" event={"ID":"75c67306-751f-46ae-8511-b77f1babd94c","Type":"ContainerStarted","Data":"2557b2c3557825bf72711869fd747ffaefb8cad9f6ea2d47b0d651f7ad242fa4"} Jan 21 21:35:34 crc kubenswrapper[4860]: I0121 21:35:34.396176 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" event={"ID":"75c67306-751f-46ae-8511-b77f1babd94c","Type":"ContainerStarted","Data":"1735543edfaee56315108f71505b5cba4b5f0c7eb470f80f6413db4d99d0c693"} Jan 21 21:35:34 crc kubenswrapper[4860]: I0121 21:35:34.449389 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:35:34 crc kubenswrapper[4860]: I0121 21:35:34.487642 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" podStartSLOduration=2.487615661 podStartE2EDuration="2.487615661s" podCreationTimestamp="2026-01-21 21:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:35:34.434498442 +0000 UTC m=+1626.656676922" watchObservedRunningTime="2026-01-21 21:35:34.487615661 +0000 UTC m=+1626.709794131" Jan 21 21:35:34 crc kubenswrapper[4860]: I0121 21:35:34.708271 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:35:34 crc kubenswrapper[4860]: I0121 21:35:34.747460 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:35:35 crc kubenswrapper[4860]: I0121 21:35:35.405713 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:35:35 crc kubenswrapper[4860]: I0121 21:35:35.406709 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d5212be1-2224-44a6-a24f-bdc146578181" containerName="watcher-kuttl-api-log" containerID="cri-o://a65b543986d1678d92608580c459e6caeca3d86f3f3e3e6bae65b76fdb51bdf1" gracePeriod=30 Jan 21 21:35:35 crc kubenswrapper[4860]: I0121 21:35:35.407192 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:35:35 crc kubenswrapper[4860]: I0121 21:35:35.407308 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d5212be1-2224-44a6-a24f-bdc146578181" containerName="watcher-api" containerID="cri-o://c24f721b1989b75a6d1fddf6eea8f9e46deb8fb9c7fa41408f7dca7db0992e75" gracePeriod=30 Jan 21 21:35:35 crc kubenswrapper[4860]: I0121 
21:35:35.448818 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.442470 4860 generic.go:334] "Generic (PLEG): container finished" podID="d5212be1-2224-44a6-a24f-bdc146578181" containerID="c24f721b1989b75a6d1fddf6eea8f9e46deb8fb9c7fa41408f7dca7db0992e75" exitCode=0 Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.442986 4860 generic.go:334] "Generic (PLEG): container finished" podID="d5212be1-2224-44a6-a24f-bdc146578181" containerID="a65b543986d1678d92608580c459e6caeca3d86f3f3e3e6bae65b76fdb51bdf1" exitCode=143 Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.443333 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d5212be1-2224-44a6-a24f-bdc146578181","Type":"ContainerDied","Data":"c24f721b1989b75a6d1fddf6eea8f9e46deb8fb9c7fa41408f7dca7db0992e75"} Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.443446 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d5212be1-2224-44a6-a24f-bdc146578181","Type":"ContainerDied","Data":"a65b543986d1678d92608580c459e6caeca3d86f3f3e3e6bae65b76fdb51bdf1"} Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.761869 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.902521 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-config-data\") pod \"d5212be1-2224-44a6-a24f-bdc146578181\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.902615 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-combined-ca-bundle\") pod \"d5212be1-2224-44a6-a24f-bdc146578181\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.902711 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5212be1-2224-44a6-a24f-bdc146578181-logs\") pod \"d5212be1-2224-44a6-a24f-bdc146578181\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.902770 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-cert-memcached-mtls\") pod \"d5212be1-2224-44a6-a24f-bdc146578181\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.902790 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-public-tls-certs\") pod \"d5212be1-2224-44a6-a24f-bdc146578181\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.902907 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-448zq\" 
(UniqueName: \"kubernetes.io/projected/d5212be1-2224-44a6-a24f-bdc146578181-kube-api-access-448zq\") pod \"d5212be1-2224-44a6-a24f-bdc146578181\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.902973 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-custom-prometheus-ca\") pod \"d5212be1-2224-44a6-a24f-bdc146578181\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.903008 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-internal-tls-certs\") pod \"d5212be1-2224-44a6-a24f-bdc146578181\" (UID: \"d5212be1-2224-44a6-a24f-bdc146578181\") " Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.903422 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5212be1-2224-44a6-a24f-bdc146578181-logs" (OuterVolumeSpecName: "logs") pod "d5212be1-2224-44a6-a24f-bdc146578181" (UID: "d5212be1-2224-44a6-a24f-bdc146578181"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:35:36 crc kubenswrapper[4860]: I0121 21:35:36.936895 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5212be1-2224-44a6-a24f-bdc146578181-kube-api-access-448zq" (OuterVolumeSpecName: "kube-api-access-448zq") pod "d5212be1-2224-44a6-a24f-bdc146578181" (UID: "d5212be1-2224-44a6-a24f-bdc146578181"). InnerVolumeSpecName "kube-api-access-448zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.001510 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5212be1-2224-44a6-a24f-bdc146578181" (UID: "d5212be1-2224-44a6-a24f-bdc146578181"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.006454 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-config-data" (OuterVolumeSpecName: "config-data") pod "d5212be1-2224-44a6-a24f-bdc146578181" (UID: "d5212be1-2224-44a6-a24f-bdc146578181"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.006890 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-448zq\" (UniqueName: \"kubernetes.io/projected/d5212be1-2224-44a6-a24f-bdc146578181-kube-api-access-448zq\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.006917 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.006959 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.006969 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5212be1-2224-44a6-a24f-bdc146578181-logs\") on node \"crc\" DevicePath \"\"" Jan 21 
21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.008347 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "d5212be1-2224-44a6-a24f-bdc146578181" (UID: "d5212be1-2224-44a6-a24f-bdc146578181"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.013748 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d5212be1-2224-44a6-a24f-bdc146578181" (UID: "d5212be1-2224-44a6-a24f-bdc146578181"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.017072 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "d5212be1-2224-44a6-a24f-bdc146578181" (UID: "d5212be1-2224-44a6-a24f-bdc146578181"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.029190 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d5212be1-2224-44a6-a24f-bdc146578181" (UID: "d5212be1-2224-44a6-a24f-bdc146578181"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.108493 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.108848 4860 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.109008 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.109099 4860 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5212be1-2224-44a6-a24f-bdc146578181-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.286745 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.460656 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.460884 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d5212be1-2224-44a6-a24f-bdc146578181","Type":"ContainerDied","Data":"fce31cd0f1a60e6c3d9e9467cbb11caf9a7c77417fddfb6289cf20f029aca9b4"} Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.460995 4860 scope.go:117] "RemoveContainer" containerID="c24f721b1989b75a6d1fddf6eea8f9e46deb8fb9c7fa41408f7dca7db0992e75" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.491506 4860 scope.go:117] "RemoveContainer" containerID="a65b543986d1678d92608580c459e6caeca3d86f3f3e3e6bae65b76fdb51bdf1" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.520966 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.543968 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.569473 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:37 crc kubenswrapper[4860]: E0121 21:35:37.570095 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5212be1-2224-44a6-a24f-bdc146578181" containerName="watcher-api" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.570116 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5212be1-2224-44a6-a24f-bdc146578181" containerName="watcher-api" Jan 21 21:35:37 crc kubenswrapper[4860]: E0121 21:35:37.570141 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5212be1-2224-44a6-a24f-bdc146578181" containerName="watcher-kuttl-api-log" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.570149 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d5212be1-2224-44a6-a24f-bdc146578181" containerName="watcher-kuttl-api-log" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.570321 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5212be1-2224-44a6-a24f-bdc146578181" containerName="watcher-kuttl-api-log" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.570348 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5212be1-2224-44a6-a24f-bdc146578181" containerName="watcher-api" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.571674 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.575849 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.584326 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.731379 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.731822 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.731904 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.731974 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.731997 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69tbl\" (UniqueName: \"kubernetes.io/projected/8d92a54a-c04b-4854-8187-34696c121452-kube-api-access-69tbl\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.732133 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d92a54a-c04b-4854-8187-34696c121452-logs\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.834735 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d92a54a-c04b-4854-8187-34696c121452-logs\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.834831 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.834896 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.834979 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.835048 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.835079 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69tbl\" (UniqueName: \"kubernetes.io/projected/8d92a54a-c04b-4854-8187-34696c121452-kube-api-access-69tbl\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.836151 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d92a54a-c04b-4854-8187-34696c121452-logs\") pod 
\"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.840339 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.840494 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.841254 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.854463 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.862729 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69tbl\" (UniqueName: \"kubernetes.io/projected/8d92a54a-c04b-4854-8187-34696c121452-kube-api-access-69tbl\") pod \"watcher-kuttl-api-0\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:37 crc kubenswrapper[4860]: I0121 21:35:37.916020 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:38 crc kubenswrapper[4860]: I0121 21:35:38.268822 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:35:38 crc kubenswrapper[4860]: I0121 21:35:38.597405 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5212be1-2224-44a6-a24f-bdc146578181" path="/var/lib/kubelet/pods/d5212be1-2224-44a6-a24f-bdc146578181/volumes" Jan 21 21:35:38 crc kubenswrapper[4860]: I0121 21:35:38.599459 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8d92a54a-c04b-4854-8187-34696c121452","Type":"ContainerStarted","Data":"5e9b5fa13804e80ac0a3c024185f3e92334f0999be633c49d194979b3dcf0df8"} Jan 21 21:35:39 crc kubenswrapper[4860]: I0121 21:35:39.617599 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8d92a54a-c04b-4854-8187-34696c121452","Type":"ContainerStarted","Data":"aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff"} Jan 21 21:35:39 crc kubenswrapper[4860]: I0121 21:35:39.618361 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8d92a54a-c04b-4854-8187-34696c121452","Type":"ContainerStarted","Data":"9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261"} Jan 21 21:35:39 crc kubenswrapper[4860]: I0121 21:35:39.621316 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:39 crc kubenswrapper[4860]: I0121 21:35:39.651010 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
podStartSLOduration=2.650965496 podStartE2EDuration="2.650965496s" podCreationTimestamp="2026-01-21 21:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:35:39.643851366 +0000 UTC m=+1631.866029856" watchObservedRunningTime="2026-01-21 21:35:39.650965496 +0000 UTC m=+1631.873143986" Jan 21 21:35:41 crc kubenswrapper[4860]: I0121 21:35:41.579444 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:35:41 crc kubenswrapper[4860]: E0121 21:35:41.580153 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:35:41 crc kubenswrapper[4860]: I0121 21:35:41.634846 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:35:42 crc kubenswrapper[4860]: I0121 21:35:42.010857 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:42 crc kubenswrapper[4860]: I0121 21:35:42.916689 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:47 crc kubenswrapper[4860]: I0121 21:35:47.916617 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:47 crc kubenswrapper[4860]: I0121 21:35:47.921445 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:48 crc kubenswrapper[4860]: I0121 
21:35:48.744549 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:35:52 crc kubenswrapper[4860]: I0121 21:35:52.578763 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:35:52 crc kubenswrapper[4860]: E0121 21:35:52.579362 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:36:03 crc kubenswrapper[4860]: I0121 21:36:03.581568 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:36:03 crc kubenswrapper[4860]: E0121 21:36:03.587380 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:36:05 crc kubenswrapper[4860]: I0121 21:36:05.025487 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-655bfffb94-t7n44" Jan 21 21:36:05 crc kubenswrapper[4860]: I0121 21:36:05.121066 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-85df5fbd4-9gdg7"] Jan 21 21:36:05 crc kubenswrapper[4860]: I0121 21:36:05.121408 4860 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" podUID="da6edf2d-041a-4469-a456-cae342270655" containerName="keystone-api" containerID="cri-o://55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf" gracePeriod=30 Jan 21 21:36:08 crc kubenswrapper[4860]: I0121 21:36:08.863790 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:36:08 crc kubenswrapper[4860]: I0121 21:36:08.970052 4860 generic.go:334] "Generic (PLEG): container finished" podID="da6edf2d-041a-4469-a456-cae342270655" containerID="55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf" exitCode=0 Jan 21 21:36:08 crc kubenswrapper[4860]: I0121 21:36:08.970578 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" Jan 21 21:36:08 crc kubenswrapper[4860]: I0121 21:36:08.970581 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" event={"ID":"da6edf2d-041a-4469-a456-cae342270655","Type":"ContainerDied","Data":"55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf"} Jan 21 21:36:08 crc kubenswrapper[4860]: I0121 21:36:08.971464 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-85df5fbd4-9gdg7" event={"ID":"da6edf2d-041a-4469-a456-cae342270655","Type":"ContainerDied","Data":"e3a911bc39d9cfde74e0c33d42e4a48a95e16be60eb5fcd2a3c9437023203786"} Jan 21 21:36:08 crc kubenswrapper[4860]: I0121 21:36:08.971760 4860 scope.go:117] "RemoveContainer" containerID="55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.005253 4860 scope.go:117] "RemoveContainer" containerID="55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf" Jan 21 21:36:09 crc kubenswrapper[4860]: E0121 21:36:09.005981 4860 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf\": container with ID starting with 55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf not found: ID does not exist" containerID="55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.006061 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf"} err="failed to get container status \"55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf\": rpc error: code = NotFound desc = could not find container \"55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf\": container with ID starting with 55b0724c2c0d35e4784ac9b9a912ba53d239ddcf980a5dc01bb02add3568faaf not found: ID does not exist" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.063024 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-internal-tls-certs\") pod \"da6edf2d-041a-4469-a456-cae342270655\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.063200 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbvrf\" (UniqueName: \"kubernetes.io/projected/da6edf2d-041a-4469-a456-cae342270655-kube-api-access-kbvrf\") pod \"da6edf2d-041a-4469-a456-cae342270655\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.063261 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-fernet-keys\") pod \"da6edf2d-041a-4469-a456-cae342270655\" (UID: 
\"da6edf2d-041a-4469-a456-cae342270655\") " Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.063306 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-public-tls-certs\") pod \"da6edf2d-041a-4469-a456-cae342270655\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.063329 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-credential-keys\") pod \"da6edf2d-041a-4469-a456-cae342270655\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.063369 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-combined-ca-bundle\") pod \"da6edf2d-041a-4469-a456-cae342270655\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.063443 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-config-data\") pod \"da6edf2d-041a-4469-a456-cae342270655\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.063477 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-scripts\") pod \"da6edf2d-041a-4469-a456-cae342270655\" (UID: \"da6edf2d-041a-4469-a456-cae342270655\") " Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.074148 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "da6edf2d-041a-4469-a456-cae342270655" (UID: "da6edf2d-041a-4469-a456-cae342270655"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.074313 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da6edf2d-041a-4469-a456-cae342270655-kube-api-access-kbvrf" (OuterVolumeSpecName: "kube-api-access-kbvrf") pod "da6edf2d-041a-4469-a456-cae342270655" (UID: "da6edf2d-041a-4469-a456-cae342270655"). InnerVolumeSpecName "kube-api-access-kbvrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.078187 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-scripts" (OuterVolumeSpecName: "scripts") pod "da6edf2d-041a-4469-a456-cae342270655" (UID: "da6edf2d-041a-4469-a456-cae342270655"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.084586 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "da6edf2d-041a-4469-a456-cae342270655" (UID: "da6edf2d-041a-4469-a456-cae342270655"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.096993 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-config-data" (OuterVolumeSpecName: "config-data") pod "da6edf2d-041a-4469-a456-cae342270655" (UID: "da6edf2d-041a-4469-a456-cae342270655"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.099498 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da6edf2d-041a-4469-a456-cae342270655" (UID: "da6edf2d-041a-4469-a456-cae342270655"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.129678 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "da6edf2d-041a-4469-a456-cae342270655" (UID: "da6edf2d-041a-4469-a456-cae342270655"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.131377 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "da6edf2d-041a-4469-a456-cae342270655" (UID: "da6edf2d-041a-4469-a456-cae342270655"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.166477 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.166545 4860 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.166564 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbvrf\" (UniqueName: \"kubernetes.io/projected/da6edf2d-041a-4469-a456-cae342270655-kube-api-access-kbvrf\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.166576 4860 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.166590 4860 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.166605 4860 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.166617 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.166628 4860 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6edf2d-041a-4469-a456-cae342270655-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.309320 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-85df5fbd4-9gdg7"] Jan 21 21:36:09 crc kubenswrapper[4860]: I0121 21:36:09.317414 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-85df5fbd4-9gdg7"] Jan 21 21:36:10 crc kubenswrapper[4860]: I0121 21:36:10.594045 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da6edf2d-041a-4469-a456-cae342270655" path="/var/lib/kubelet/pods/da6edf2d-041a-4469-a456-cae342270655/volumes" Jan 21 21:36:11 crc kubenswrapper[4860]: I0121 21:36:11.241917 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:11 crc kubenswrapper[4860]: I0121 21:36:11.246142 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="ceilometer-central-agent" containerID="cri-o://1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f" gracePeriod=30 Jan 21 21:36:11 crc kubenswrapper[4860]: I0121 21:36:11.246193 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="sg-core" containerID="cri-o://ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c" gracePeriod=30 Jan 21 21:36:11 crc kubenswrapper[4860]: I0121 21:36:11.246276 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="proxy-httpd" 
containerID="cri-o://2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685" gracePeriod=30 Jan 21 21:36:11 crc kubenswrapper[4860]: I0121 21:36:11.246322 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="ceilometer-notification-agent" containerID="cri-o://960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204" gracePeriod=30 Jan 21 21:36:11 crc kubenswrapper[4860]: E0121 21:36:11.419945 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd259d8e_c8c3_408f_bca2_2c5a21a06266.slice/crio-ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd259d8e_c8c3_408f_bca2_2c5a21a06266.slice/crio-conmon-ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c.scope\": RecentStats: unable to find data in memory cache]" Jan 21 21:36:12 crc kubenswrapper[4860]: I0121 21:36:12.009373 4860 generic.go:334] "Generic (PLEG): container finished" podID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerID="2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685" exitCode=0 Jan 21 21:36:12 crc kubenswrapper[4860]: I0121 21:36:12.009871 4860 generic.go:334] "Generic (PLEG): container finished" podID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerID="ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c" exitCode=2 Jan 21 21:36:12 crc kubenswrapper[4860]: I0121 21:36:12.009900 4860 generic.go:334] "Generic (PLEG): container finished" podID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerID="1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f" exitCode=0 Jan 21 21:36:12 crc kubenswrapper[4860]: I0121 21:36:12.009464 4860 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd259d8e-c8c3-408f-bca2-2c5a21a06266","Type":"ContainerDied","Data":"2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685"} Jan 21 21:36:12 crc kubenswrapper[4860]: I0121 21:36:12.010019 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd259d8e-c8c3-408f-bca2-2c5a21a06266","Type":"ContainerDied","Data":"ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c"} Jan 21 21:36:12 crc kubenswrapper[4860]: I0121 21:36:12.010041 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd259d8e-c8c3-408f-bca2-2c5a21a06266","Type":"ContainerDied","Data":"1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f"} Jan 21 21:36:14 crc kubenswrapper[4860]: I0121 21:36:14.579704 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:36:14 crc kubenswrapper[4860]: E0121 21:36:14.580410 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.726776 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.889095 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-run-httpd\") pod \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.889197 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-sg-core-conf-yaml\") pod \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.889249 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z2b9\" (UniqueName: \"kubernetes.io/projected/bd259d8e-c8c3-408f-bca2-2c5a21a06266-kube-api-access-2z2b9\") pod \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.889307 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-config-data\") pod \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.889488 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-combined-ca-bundle\") pod \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.889509 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-log-httpd\") pod \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.889550 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-scripts\") pod \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.889583 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-ceilometer-tls-certs\") pod \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\" (UID: \"bd259d8e-c8c3-408f-bca2-2c5a21a06266\") " Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.889701 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bd259d8e-c8c3-408f-bca2-2c5a21a06266" (UID: "bd259d8e-c8c3-408f-bca2-2c5a21a06266"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.890903 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.891125 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bd259d8e-c8c3-408f-bca2-2c5a21a06266" (UID: "bd259d8e-c8c3-408f-bca2-2c5a21a06266"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.897030 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-scripts" (OuterVolumeSpecName: "scripts") pod "bd259d8e-c8c3-408f-bca2-2c5a21a06266" (UID: "bd259d8e-c8c3-408f-bca2-2c5a21a06266"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.911100 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd259d8e-c8c3-408f-bca2-2c5a21a06266-kube-api-access-2z2b9" (OuterVolumeSpecName: "kube-api-access-2z2b9") pod "bd259d8e-c8c3-408f-bca2-2c5a21a06266" (UID: "bd259d8e-c8c3-408f-bca2-2c5a21a06266"). InnerVolumeSpecName "kube-api-access-2z2b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.917396 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bd259d8e-c8c3-408f-bca2-2c5a21a06266" (UID: "bd259d8e-c8c3-408f-bca2-2c5a21a06266"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.943253 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bd259d8e-c8c3-408f-bca2-2c5a21a06266" (UID: "bd259d8e-c8c3-408f-bca2-2c5a21a06266"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.985743 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-config-data" (OuterVolumeSpecName: "config-data") pod "bd259d8e-c8c3-408f-bca2-2c5a21a06266" (UID: "bd259d8e-c8c3-408f-bca2-2c5a21a06266"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.993316 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd259d8e-c8c3-408f-bca2-2c5a21a06266-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.993360 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.993370 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.993383 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.993395 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z2b9\" (UniqueName: \"kubernetes.io/projected/bd259d8e-c8c3-408f-bca2-2c5a21a06266-kube-api-access-2z2b9\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:16 crc kubenswrapper[4860]: I0121 21:36:16.993406 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.009140 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd259d8e-c8c3-408f-bca2-2c5a21a06266" (UID: "bd259d8e-c8c3-408f-bca2-2c5a21a06266"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.060186 4860 generic.go:334] "Generic (PLEG): container finished" podID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerID="960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204" exitCode=0 Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.060246 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.060272 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd259d8e-c8c3-408f-bca2-2c5a21a06266","Type":"ContainerDied","Data":"960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204"} Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.060392 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd259d8e-c8c3-408f-bca2-2c5a21a06266","Type":"ContainerDied","Data":"f61e1912fe03e479ea515523c32b3ec4227d3ec5f3066c75d73db565376cb9be"} Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.060432 4860 scope.go:117] "RemoveContainer" containerID="2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.099188 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd259d8e-c8c3-408f-bca2-2c5a21a06266-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.102374 4860 scope.go:117] "RemoveContainer" containerID="ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.104139 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.118146 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.131653 4860 scope.go:117] "RemoveContainer" containerID="960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.139244 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:17 crc kubenswrapper[4860]: E0121 21:36:17.139809 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="sg-core" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.139843 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="sg-core" Jan 21 21:36:17 crc kubenswrapper[4860]: E0121 21:36:17.139854 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="proxy-httpd" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.139861 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="proxy-httpd" Jan 21 21:36:17 crc kubenswrapper[4860]: E0121 21:36:17.139877 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da6edf2d-041a-4469-a456-cae342270655" containerName="keystone-api" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.139885 4860 
state_mem.go:107] "Deleted CPUSet assignment" podUID="da6edf2d-041a-4469-a456-cae342270655" containerName="keystone-api" Jan 21 21:36:17 crc kubenswrapper[4860]: E0121 21:36:17.139908 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="ceilometer-notification-agent" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.139914 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="ceilometer-notification-agent" Jan 21 21:36:17 crc kubenswrapper[4860]: E0121 21:36:17.140181 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="ceilometer-central-agent" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.140196 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="ceilometer-central-agent" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.140441 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="ceilometer-central-agent" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.140461 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="ceilometer-notification-agent" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.140476 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="proxy-httpd" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.140492 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" containerName="sg-core" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.140502 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="da6edf2d-041a-4469-a456-cae342270655" containerName="keystone-api" Jan 21 21:36:17 
crc kubenswrapper[4860]: I0121 21:36:17.142662 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.148441 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.148752 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.148904 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.162464 4860 scope.go:117] "RemoveContainer" containerID="1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.162704 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.201852 4860 scope.go:117] "RemoveContainer" containerID="2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685" Jan 21 21:36:17 crc kubenswrapper[4860]: E0121 21:36:17.202731 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685\": container with ID starting with 2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685 not found: ID does not exist" containerID="2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.202777 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685"} err="failed to get container status 
\"2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685\": rpc error: code = NotFound desc = could not find container \"2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685\": container with ID starting with 2ac1745de07e3ee5d53a3614b82916e56c379e74aeec44ca31ffacb7082b0685 not found: ID does not exist" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.202802 4860 scope.go:117] "RemoveContainer" containerID="ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c" Jan 21 21:36:17 crc kubenswrapper[4860]: E0121 21:36:17.203222 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c\": container with ID starting with ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c not found: ID does not exist" containerID="ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.203239 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c"} err="failed to get container status \"ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c\": rpc error: code = NotFound desc = could not find container \"ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c\": container with ID starting with ee8fee12e28c2aa2d905c3986069a496d404f632c2a144beae0e81f6f1c7ef4c not found: ID does not exist" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.203255 4860 scope.go:117] "RemoveContainer" containerID="960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204" Jan 21 21:36:17 crc kubenswrapper[4860]: E0121 21:36:17.203564 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204\": container with ID starting with 960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204 not found: ID does not exist" containerID="960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.203589 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204"} err="failed to get container status \"960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204\": rpc error: code = NotFound desc = could not find container \"960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204\": container with ID starting with 960178c2d92d1a184aabce3f684fc2561c2b880007601c30169ec136d49ad204 not found: ID does not exist" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.203601 4860 scope.go:117] "RemoveContainer" containerID="1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f" Jan 21 21:36:17 crc kubenswrapper[4860]: E0121 21:36:17.203845 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f\": container with ID starting with 1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f not found: ID does not exist" containerID="1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.203862 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f"} err="failed to get container status \"1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f\": rpc error: code = NotFound desc = could not find container \"1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f\": container with ID 
starting with 1b02b9ad332f9a8145211ed71f42b2a6a3085b35f05be03c5b201c37c4dd4b4f not found: ID does not exist" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.302500 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-run-httpd\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.302590 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-log-httpd\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.302633 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.302664 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx8j5\" (UniqueName: \"kubernetes.io/projected/41261613-b288-4f45-bfea-3400abcd5ae9-kube-api-access-hx8j5\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.302698 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-scripts\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.302789 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.302819 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.302837 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-config-data\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.404383 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-scripts\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.404974 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.405006 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.405025 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-config-data\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.405055 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-run-httpd\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.405088 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-log-httpd\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.405124 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.405165 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx8j5\" (UniqueName: 
\"kubernetes.io/projected/41261613-b288-4f45-bfea-3400abcd5ae9-kube-api-access-hx8j5\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.406237 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-log-httpd\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.406546 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-run-httpd\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.411961 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.413581 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-scripts\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.415231 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc 
kubenswrapper[4860]: I0121 21:36:17.416684 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-config-data\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.424898 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.429126 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx8j5\" (UniqueName: \"kubernetes.io/projected/41261613-b288-4f45-bfea-3400abcd5ae9-kube-api-access-hx8j5\") pod \"ceilometer-0\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:17 crc kubenswrapper[4860]: I0121 21:36:17.467879 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:18 crc kubenswrapper[4860]: I0121 21:36:18.307971 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:18 crc kubenswrapper[4860]: I0121 21:36:18.593786 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd259d8e-c8c3-408f-bca2-2c5a21a06266" path="/var/lib/kubelet/pods/bd259d8e-c8c3-408f-bca2-2c5a21a06266/volumes" Jan 21 21:36:19 crc kubenswrapper[4860]: I0121 21:36:19.303237 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41261613-b288-4f45-bfea-3400abcd5ae9","Type":"ContainerStarted","Data":"4eb595a454390cd5c11e3c09015ac53dc906291a734f14d7182e4c2936e55e21"} Jan 21 21:36:19 crc kubenswrapper[4860]: I0121 21:36:19.303636 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41261613-b288-4f45-bfea-3400abcd5ae9","Type":"ContainerStarted","Data":"486914bb7461c8d5af37a57bf459a261387fa644d9ae58223dbac6ea8bacc34a"} Jan 21 21:36:20 crc kubenswrapper[4860]: I0121 21:36:20.419316 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41261613-b288-4f45-bfea-3400abcd5ae9","Type":"ContainerStarted","Data":"607d18622e3922313a68ebc4c144a1f71beff8f95b6931e979e4d02fb1cbfa6b"} Jan 21 21:36:21 crc kubenswrapper[4860]: I0121 21:36:21.437824 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41261613-b288-4f45-bfea-3400abcd5ae9","Type":"ContainerStarted","Data":"212fde9efb37718e84c806158a938d367ae1d62b31fa1e89dc2529cb0dd07037"} Jan 21 21:36:25 crc kubenswrapper[4860]: I0121 21:36:25.579279 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:36:25 crc kubenswrapper[4860]: E0121 21:36:25.579690 4860 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:36:32 crc kubenswrapper[4860]: I0121 21:36:32.544020 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41261613-b288-4f45-bfea-3400abcd5ae9","Type":"ContainerStarted","Data":"6a01133bd6306ae37f606eb618d10bc8a7ecbfbdd6dbd64097f73e2d27606a09"} Jan 21 21:36:32 crc kubenswrapper[4860]: I0121 21:36:32.545023 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:32 crc kubenswrapper[4860]: I0121 21:36:32.591998 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.091646923 podStartE2EDuration="15.591946472s" podCreationTimestamp="2026-01-21 21:36:17 +0000 UTC" firstStartedPulling="2026-01-21 21:36:18.329051199 +0000 UTC m=+1670.551229669" lastFinishedPulling="2026-01-21 21:36:31.829350748 +0000 UTC m=+1684.051529218" observedRunningTime="2026-01-21 21:36:32.590276541 +0000 UTC m=+1684.812455021" watchObservedRunningTime="2026-01-21 21:36:32.591946472 +0000 UTC m=+1684.814124942" Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.224055 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qbjgq"] Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.227548 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.252755 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qbjgq"] Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.310789 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-catalog-content\") pod \"community-operators-qbjgq\" (UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.310896 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-utilities\") pod \"community-operators-qbjgq\" (UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.311092 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6qlm\" (UniqueName: \"kubernetes.io/projected/517c2f91-b000-48db-9db7-8ed857b995c8-kube-api-access-f6qlm\") pod \"community-operators-qbjgq\" (UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.413787 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-catalog-content\") pod \"community-operators-qbjgq\" (UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.413947 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-utilities\") pod \"community-operators-qbjgq\" (UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.414060 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6qlm\" (UniqueName: \"kubernetes.io/projected/517c2f91-b000-48db-9db7-8ed857b995c8-kube-api-access-f6qlm\") pod \"community-operators-qbjgq\" (UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.414563 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-catalog-content\") pod \"community-operators-qbjgq\" (UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.414640 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-utilities\") pod \"community-operators-qbjgq\" (UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.460953 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6qlm\" (UniqueName: \"kubernetes.io/projected/517c2f91-b000-48db-9db7-8ed857b995c8-kube-api-access-f6qlm\") pod \"community-operators-qbjgq\" (UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:33 crc kubenswrapper[4860]: I0121 21:36:33.560768 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:34 crc kubenswrapper[4860]: I0121 21:36:34.175589 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qbjgq"] Jan 21 21:36:34 crc kubenswrapper[4860]: I0121 21:36:34.566441 4860 generic.go:334] "Generic (PLEG): container finished" podID="517c2f91-b000-48db-9db7-8ed857b995c8" containerID="6c0c5b1c9923f5e61e558d9e13231e451cf93f8c44a054712f5bd58d660b5a4b" exitCode=0 Jan 21 21:36:34 crc kubenswrapper[4860]: I0121 21:36:34.566844 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjgq" event={"ID":"517c2f91-b000-48db-9db7-8ed857b995c8","Type":"ContainerDied","Data":"6c0c5b1c9923f5e61e558d9e13231e451cf93f8c44a054712f5bd58d660b5a4b"} Jan 21 21:36:34 crc kubenswrapper[4860]: I0121 21:36:34.566879 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjgq" event={"ID":"517c2f91-b000-48db-9db7-8ed857b995c8","Type":"ContainerStarted","Data":"76316d5f020843461dd93476709aad5645a28174f1ff5765aff43531f691d553"} Jan 21 21:36:35 crc kubenswrapper[4860]: I0121 21:36:35.579366 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjgq" event={"ID":"517c2f91-b000-48db-9db7-8ed857b995c8","Type":"ContainerStarted","Data":"57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664"} Jan 21 21:36:36 crc kubenswrapper[4860]: I0121 21:36:36.580609 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:36:36 crc kubenswrapper[4860]: E0121 21:36:36.581872 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:36:36 crc kubenswrapper[4860]: I0121 21:36:36.598165 4860 generic.go:334] "Generic (PLEG): container finished" podID="517c2f91-b000-48db-9db7-8ed857b995c8" containerID="57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664" exitCode=0 Jan 21 21:36:36 crc kubenswrapper[4860]: I0121 21:36:36.598220 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjgq" event={"ID":"517c2f91-b000-48db-9db7-8ed857b995c8","Type":"ContainerDied","Data":"57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664"} Jan 21 21:36:38 crc kubenswrapper[4860]: I0121 21:36:38.620832 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjgq" event={"ID":"517c2f91-b000-48db-9db7-8ed857b995c8","Type":"ContainerStarted","Data":"badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac"} Jan 21 21:36:38 crc kubenswrapper[4860]: I0121 21:36:38.659709 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qbjgq" podStartSLOduration=2.796395559 podStartE2EDuration="5.659677248s" podCreationTimestamp="2026-01-21 21:36:33 +0000 UTC" firstStartedPulling="2026-01-21 21:36:34.569099602 +0000 UTC m=+1686.791278072" lastFinishedPulling="2026-01-21 21:36:37.432381291 +0000 UTC m=+1689.654559761" observedRunningTime="2026-01-21 21:36:38.650147104 +0000 UTC m=+1690.872325574" watchObservedRunningTime="2026-01-21 21:36:38.659677248 +0000 UTC m=+1690.881855748" Jan 21 21:36:43 crc kubenswrapper[4860]: I0121 21:36:43.561913 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:43 crc kubenswrapper[4860]: I0121 
21:36:43.562505 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:43 crc kubenswrapper[4860]: I0121 21:36:43.632318 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:43 crc kubenswrapper[4860]: I0121 21:36:43.751751 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.431021 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs"] Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.438765 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bhqxs"] Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.599371 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b92928-b56a-4621-8959-594cd055b50b" path="/var/lib/kubelet/pods/71b92928-b56a-4621-8959-594cd055b50b/volumes" Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.604764 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.605183 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="8d92a54a-c04b-4854-8187-34696c121452" containerName="watcher-kuttl-api-log" containerID="cri-o://9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261" gracePeriod=30 Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.605402 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="8d92a54a-c04b-4854-8187-34696c121452" containerName="watcher-api" 
containerID="cri-o://aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff" gracePeriod=30 Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.643149 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher4ccb-account-delete-gzbfj"] Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.645065 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.661027 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.661397 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="068f0a99-9308-4095-b015-9c13638ca80b" containerName="watcher-decision-engine" containerID="cri-o://0ecf0f3af20536e566b4f2098aa6e189687e99a9cb36f97c35125bf2a4760b53" gracePeriod=30 Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.686415 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher4ccb-account-delete-gzbfj"] Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.695996 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.696668 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="d4d0733a-5369-4bee-98b5-44f2d588ccf7" containerName="watcher-applier" containerID="cri-o://854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f" gracePeriod=30 Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.825753 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-operator-scripts\") pod \"watcher4ccb-account-delete-gzbfj\" (UID: \"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe\") " pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.825882 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf4hv\" (UniqueName: \"kubernetes.io/projected/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-kube-api-access-qf4hv\") pod \"watcher4ccb-account-delete-gzbfj\" (UID: \"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe\") " pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.927817 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf4hv\" (UniqueName: \"kubernetes.io/projected/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-kube-api-access-qf4hv\") pod \"watcher4ccb-account-delete-gzbfj\" (UID: \"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe\") " pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.928177 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-operator-scripts\") pod \"watcher4ccb-account-delete-gzbfj\" (UID: \"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe\") " pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.929281 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-operator-scripts\") pod \"watcher4ccb-account-delete-gzbfj\" (UID: \"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe\") " pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.961698 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf4hv\" (UniqueName: \"kubernetes.io/projected/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-kube-api-access-qf4hv\") pod \"watcher4ccb-account-delete-gzbfj\" (UID: \"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe\") " pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" Jan 21 21:36:44 crc kubenswrapper[4860]: I0121 21:36:44.998860 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" Jan 21 21:36:45 crc kubenswrapper[4860]: I0121 21:36:45.696682 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher4ccb-account-delete-gzbfj"] Jan 21 21:36:45 crc kubenswrapper[4860]: I0121 21:36:45.769356 4860 generic.go:334] "Generic (PLEG): container finished" podID="8d92a54a-c04b-4854-8187-34696c121452" containerID="9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261" exitCode=143 Jan 21 21:36:45 crc kubenswrapper[4860]: I0121 21:36:45.769493 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8d92a54a-c04b-4854-8187-34696c121452","Type":"ContainerDied","Data":"9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261"} Jan 21 21:36:45 crc kubenswrapper[4860]: I0121 21:36:45.770919 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" event={"ID":"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe","Type":"ContainerStarted","Data":"10262e4f0d36f0426d1fa01caa66ef63e9e0bfaef0bf1934e57fc4dbfe717598"} Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.414003 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.562686 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d92a54a-c04b-4854-8187-34696c121452-logs\") pod \"8d92a54a-c04b-4854-8187-34696c121452\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.562837 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-cert-memcached-mtls\") pod \"8d92a54a-c04b-4854-8187-34696c121452\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.562914 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-custom-prometheus-ca\") pod \"8d92a54a-c04b-4854-8187-34696c121452\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.562992 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-config-data\") pod \"8d92a54a-c04b-4854-8187-34696c121452\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.563043 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-combined-ca-bundle\") pod \"8d92a54a-c04b-4854-8187-34696c121452\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.563124 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-69tbl\" (UniqueName: \"kubernetes.io/projected/8d92a54a-c04b-4854-8187-34696c121452-kube-api-access-69tbl\") pod \"8d92a54a-c04b-4854-8187-34696c121452\" (UID: \"8d92a54a-c04b-4854-8187-34696c121452\") " Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.563312 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d92a54a-c04b-4854-8187-34696c121452-logs" (OuterVolumeSpecName: "logs") pod "8d92a54a-c04b-4854-8187-34696c121452" (UID: "8d92a54a-c04b-4854-8187-34696c121452"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.564457 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d92a54a-c04b-4854-8187-34696c121452-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.570528 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d92a54a-c04b-4854-8187-34696c121452-kube-api-access-69tbl" (OuterVolumeSpecName: "kube-api-access-69tbl") pod "8d92a54a-c04b-4854-8187-34696c121452" (UID: "8d92a54a-c04b-4854-8187-34696c121452"). InnerVolumeSpecName "kube-api-access-69tbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.613075 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d92a54a-c04b-4854-8187-34696c121452" (UID: "8d92a54a-c04b-4854-8187-34696c121452"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.620133 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8d92a54a-c04b-4854-8187-34696c121452" (UID: "8d92a54a-c04b-4854-8187-34696c121452"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.641557 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-config-data" (OuterVolumeSpecName: "config-data") pod "8d92a54a-c04b-4854-8187-34696c121452" (UID: "8d92a54a-c04b-4854-8187-34696c121452"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.666486 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.666555 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.666564 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.666580 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69tbl\" (UniqueName: \"kubernetes.io/projected/8d92a54a-c04b-4854-8187-34696c121452-kube-api-access-69tbl\") on node 
\"crc\" DevicePath \"\"" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.684611 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "8d92a54a-c04b-4854-8187-34696c121452" (UID: "8d92a54a-c04b-4854-8187-34696c121452"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.769381 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8d92a54a-c04b-4854-8187-34696c121452-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.792201 4860 generic.go:334] "Generic (PLEG): container finished" podID="334fb0f4-eb8f-4da4-a69b-35c71ea5eebe" containerID="ef6b20df4e06c4af3291b9f66b14c808adfecf2e7159dddada94cba8f1aa798e" exitCode=0 Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.792328 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" event={"ID":"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe","Type":"ContainerDied","Data":"ef6b20df4e06c4af3291b9f66b14c808adfecf2e7159dddada94cba8f1aa798e"} Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.795218 4860 generic.go:334] "Generic (PLEG): container finished" podID="068f0a99-9308-4095-b015-9c13638ca80b" containerID="0ecf0f3af20536e566b4f2098aa6e189687e99a9cb36f97c35125bf2a4760b53" exitCode=0 Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.795313 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"068f0a99-9308-4095-b015-9c13638ca80b","Type":"ContainerDied","Data":"0ecf0f3af20536e566b4f2098aa6e189687e99a9cb36f97c35125bf2a4760b53"} Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.797797 4860 
generic.go:334] "Generic (PLEG): container finished" podID="8d92a54a-c04b-4854-8187-34696c121452" containerID="aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff" exitCode=0 Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.797833 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8d92a54a-c04b-4854-8187-34696c121452","Type":"ContainerDied","Data":"aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff"} Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.797865 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8d92a54a-c04b-4854-8187-34696c121452","Type":"ContainerDied","Data":"5e9b5fa13804e80ac0a3c024185f3e92334f0999be633c49d194979b3dcf0df8"} Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.797892 4860 scope.go:117] "RemoveContainer" containerID="aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.798083 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.850595 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.878755 4860 scope.go:117] "RemoveContainer" containerID="9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.903848 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.933416 4860 scope.go:117] "RemoveContainer" containerID="aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff" Jan 21 21:36:46 crc kubenswrapper[4860]: E0121 21:36:46.934208 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff\": container with ID starting with aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff not found: ID does not exist" containerID="aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.934263 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff"} err="failed to get container status \"aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff\": rpc error: code = NotFound desc = could not find container \"aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff\": container with ID starting with aa1ddaa32d48c12615e531f6ce234a48ea17ffdacf36d6956ffdbc0489de9cff not found: ID does not exist" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.934295 4860 scope.go:117] "RemoveContainer" 
containerID="9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261" Jan 21 21:36:46 crc kubenswrapper[4860]: E0121 21:36:46.935779 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261\": container with ID starting with 9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261 not found: ID does not exist" containerID="9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261" Jan 21 21:36:46 crc kubenswrapper[4860]: I0121 21:36:46.935808 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261"} err="failed to get container status \"9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261\": rpc error: code = NotFound desc = could not find container \"9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261\": container with ID starting with 9f143c066c5dce02bec7ffc87dc0f2f19c757cad5c3a227672f820fd55224261 not found: ID does not exist" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.206243 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qbjgq"] Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.206603 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qbjgq" podUID="517c2f91-b000-48db-9db7-8ed857b995c8" containerName="registry-server" containerID="cri-o://badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac" gracePeriod=2 Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.215172 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.224841 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-combined-ca-bundle\") pod \"068f0a99-9308-4095-b015-9c13638ca80b\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.224963 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d7rp\" (UniqueName: \"kubernetes.io/projected/068f0a99-9308-4095-b015-9c13638ca80b-kube-api-access-4d7rp\") pod \"068f0a99-9308-4095-b015-9c13638ca80b\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.225018 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-custom-prometheus-ca\") pod \"068f0a99-9308-4095-b015-9c13638ca80b\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.225131 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-config-data\") pod \"068f0a99-9308-4095-b015-9c13638ca80b\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.225239 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068f0a99-9308-4095-b015-9c13638ca80b-logs\") pod \"068f0a99-9308-4095-b015-9c13638ca80b\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.225300 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-cert-memcached-mtls\") pod \"068f0a99-9308-4095-b015-9c13638ca80b\" (UID: \"068f0a99-9308-4095-b015-9c13638ca80b\") " Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.225996 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/068f0a99-9308-4095-b015-9c13638ca80b-logs" (OuterVolumeSpecName: "logs") pod "068f0a99-9308-4095-b015-9c13638ca80b" (UID: "068f0a99-9308-4095-b015-9c13638ca80b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.232986 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068f0a99-9308-4095-b015-9c13638ca80b-kube-api-access-4d7rp" (OuterVolumeSpecName: "kube-api-access-4d7rp") pod "068f0a99-9308-4095-b015-9c13638ca80b" (UID: "068f0a99-9308-4095-b015-9c13638ca80b"). InnerVolumeSpecName "kube-api-access-4d7rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.274195 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "068f0a99-9308-4095-b015-9c13638ca80b" (UID: "068f0a99-9308-4095-b015-9c13638ca80b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.293185 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "068f0a99-9308-4095-b015-9c13638ca80b" (UID: "068f0a99-9308-4095-b015-9c13638ca80b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.295081 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-config-data" (OuterVolumeSpecName: "config-data") pod "068f0a99-9308-4095-b015-9c13638ca80b" (UID: "068f0a99-9308-4095-b015-9c13638ca80b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.299922 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "068f0a99-9308-4095-b015-9c13638ca80b" (UID: "068f0a99-9308-4095-b015-9c13638ca80b"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.327876 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.327965 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.327985 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d7rp\" (UniqueName: \"kubernetes.io/projected/068f0a99-9308-4095-b015-9c13638ca80b-kube-api-access-4d7rp\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.328006 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.328022 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068f0a99-9308-4095-b015-9c13638ca80b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.328036 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068f0a99-9308-4095-b015-9c13638ca80b-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.483640 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.810715 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.810709 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"068f0a99-9308-4095-b015-9c13638ca80b","Type":"ContainerDied","Data":"ad85e8d22aa8518b992b8e72b2060becb11bed35e3e17830bc6170f10da4e230"} Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.811434 4860 scope.go:117] "RemoveContainer" containerID="0ecf0f3af20536e566b4f2098aa6e189687e99a9cb36f97c35125bf2a4760b53" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.812817 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.823356 4860 generic.go:334] "Generic (PLEG): container finished" podID="517c2f91-b000-48db-9db7-8ed857b995c8" containerID="badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac" exitCode=0 Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.823482 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjgq" event={"ID":"517c2f91-b000-48db-9db7-8ed857b995c8","Type":"ContainerDied","Data":"badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac"} Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.823617 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbjgq" event={"ID":"517c2f91-b000-48db-9db7-8ed857b995c8","Type":"ContainerDied","Data":"76316d5f020843461dd93476709aad5645a28174f1ff5765aff43531f691d553"} Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.849355 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6qlm\" (UniqueName: \"kubernetes.io/projected/517c2f91-b000-48db-9db7-8ed857b995c8-kube-api-access-f6qlm\") pod \"517c2f91-b000-48db-9db7-8ed857b995c8\" (UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.849434 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-catalog-content\") pod \"517c2f91-b000-48db-9db7-8ed857b995c8\" (UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.849515 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-utilities\") pod \"517c2f91-b000-48db-9db7-8ed857b995c8\" 
(UID: \"517c2f91-b000-48db-9db7-8ed857b995c8\") " Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.851387 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-utilities" (OuterVolumeSpecName: "utilities") pod "517c2f91-b000-48db-9db7-8ed857b995c8" (UID: "517c2f91-b000-48db-9db7-8ed857b995c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.854593 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.867139 4860 scope.go:117] "RemoveContainer" containerID="badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.876086 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.877278 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517c2f91-b000-48db-9db7-8ed857b995c8-kube-api-access-f6qlm" (OuterVolumeSpecName: "kube-api-access-f6qlm") pod "517c2f91-b000-48db-9db7-8ed857b995c8" (UID: "517c2f91-b000-48db-9db7-8ed857b995c8"). InnerVolumeSpecName "kube-api-access-f6qlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.885826 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.908658 4860 scope.go:117] "RemoveContainer" containerID="57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.923450 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "517c2f91-b000-48db-9db7-8ed857b995c8" (UID: "517c2f91-b000-48db-9db7-8ed857b995c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.936789 4860 scope.go:117] "RemoveContainer" containerID="6c0c5b1c9923f5e61e558d9e13231e451cf93f8c44a054712f5bd58d660b5a4b" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.962527 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6qlm\" (UniqueName: \"kubernetes.io/projected/517c2f91-b000-48db-9db7-8ed857b995c8-kube-api-access-f6qlm\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.962578 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c2f91-b000-48db-9db7-8ed857b995c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.979991 4860 scope.go:117] "RemoveContainer" containerID="badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac" Jan 21 21:36:47 crc kubenswrapper[4860]: E0121 21:36:47.981118 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac\": container with ID starting with badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac not found: ID does not exist" containerID="badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.981199 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac"} err="failed to get container status \"badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac\": rpc error: code = NotFound desc = could not find container \"badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac\": container with ID starting with badf67809505046804df1d341c8a6572042d956a4a17980353302b9f00615dac not found: ID does not exist" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.981253 4860 scope.go:117] "RemoveContainer" containerID="57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664" Jan 21 21:36:47 crc kubenswrapper[4860]: E0121 21:36:47.981829 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664\": container with ID starting with 57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664 not found: ID does not exist" containerID="57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.981890 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664"} err="failed to get container status \"57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664\": rpc error: code = NotFound desc = could not find container \"57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664\": container with ID 
starting with 57fcb1a4a2709012c9557a13329df9306542a42f0adfcd765e9767175ce51664 not found: ID does not exist" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.981945 4860 scope.go:117] "RemoveContainer" containerID="6c0c5b1c9923f5e61e558d9e13231e451cf93f8c44a054712f5bd58d660b5a4b" Jan 21 21:36:47 crc kubenswrapper[4860]: E0121 21:36:47.982463 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0c5b1c9923f5e61e558d9e13231e451cf93f8c44a054712f5bd58d660b5a4b\": container with ID starting with 6c0c5b1c9923f5e61e558d9e13231e451cf93f8c44a054712f5bd58d660b5a4b not found: ID does not exist" containerID="6c0c5b1c9923f5e61e558d9e13231e451cf93f8c44a054712f5bd58d660b5a4b" Jan 21 21:36:47 crc kubenswrapper[4860]: I0121 21:36:47.982485 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0c5b1c9923f5e61e558d9e13231e451cf93f8c44a054712f5bd58d660b5a4b"} err="failed to get container status \"6c0c5b1c9923f5e61e558d9e13231e451cf93f8c44a054712f5bd58d660b5a4b\": rpc error: code = NotFound desc = could not find container \"6c0c5b1c9923f5e61e558d9e13231e451cf93f8c44a054712f5bd58d660b5a4b\": container with ID starting with 6c0c5b1c9923f5e61e558d9e13231e451cf93f8c44a054712f5bd58d660b5a4b not found: ID does not exist" Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.069400 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.069723 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="ceilometer-central-agent" containerID="cri-o://4eb595a454390cd5c11e3c09015ac53dc906291a734f14d7182e4c2936e55e21" gracePeriod=30 Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.069799 4860 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="proxy-httpd" containerID="cri-o://6a01133bd6306ae37f606eb618d10bc8a7ecbfbdd6dbd64097f73e2d27606a09" gracePeriod=30 Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.069906 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="ceilometer-notification-agent" containerID="cri-o://607d18622e3922313a68ebc4c144a1f71beff8f95b6931e979e4d02fb1cbfa6b" gracePeriod=30 Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.069960 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="sg-core" containerID="cri-o://212fde9efb37718e84c806158a938d367ae1d62b31fa1e89dc2529cb0dd07037" gracePeriod=30 Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.237489 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.268182 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-operator-scripts\") pod \"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe\" (UID: \"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe\") " Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.268502 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf4hv\" (UniqueName: \"kubernetes.io/projected/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-kube-api-access-qf4hv\") pod \"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe\" (UID: \"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe\") " Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.269096 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "334fb0f4-eb8f-4da4-a69b-35c71ea5eebe" (UID: "334fb0f4-eb8f-4da4-a69b-35c71ea5eebe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.272349 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-kube-api-access-qf4hv" (OuterVolumeSpecName: "kube-api-access-qf4hv") pod "334fb0f4-eb8f-4da4-a69b-35c71ea5eebe" (UID: "334fb0f4-eb8f-4da4-a69b-35c71ea5eebe"). InnerVolumeSpecName "kube-api-access-qf4hv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.371479 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf4hv\" (UniqueName: \"kubernetes.io/projected/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-kube-api-access-qf4hv\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.371973 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.595893 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068f0a99-9308-4095-b015-9c13638ca80b" path="/var/lib/kubelet/pods/068f0a99-9308-4095-b015-9c13638ca80b/volumes" Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.596589 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d92a54a-c04b-4854-8187-34696c121452" path="/var/lib/kubelet/pods/8d92a54a-c04b-4854-8187-34696c121452/volumes" Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.835481 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" event={"ID":"334fb0f4-eb8f-4da4-a69b-35c71ea5eebe","Type":"ContainerDied","Data":"10262e4f0d36f0426d1fa01caa66ef63e9e0bfaef0bf1934e57fc4dbfe717598"} Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.835554 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher4ccb-account-delete-gzbfj" Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.835572 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10262e4f0d36f0426d1fa01caa66ef63e9e0bfaef0bf1934e57fc4dbfe717598" Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.839925 4860 generic.go:334] "Generic (PLEG): container finished" podID="41261613-b288-4f45-bfea-3400abcd5ae9" containerID="6a01133bd6306ae37f606eb618d10bc8a7ecbfbdd6dbd64097f73e2d27606a09" exitCode=0 Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.839993 4860 generic.go:334] "Generic (PLEG): container finished" podID="41261613-b288-4f45-bfea-3400abcd5ae9" containerID="212fde9efb37718e84c806158a938d367ae1d62b31fa1e89dc2529cb0dd07037" exitCode=2 Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.840010 4860 generic.go:334] "Generic (PLEG): container finished" podID="41261613-b288-4f45-bfea-3400abcd5ae9" containerID="4eb595a454390cd5c11e3c09015ac53dc906291a734f14d7182e4c2936e55e21" exitCode=0 Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.839979 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41261613-b288-4f45-bfea-3400abcd5ae9","Type":"ContainerDied","Data":"6a01133bd6306ae37f606eb618d10bc8a7ecbfbdd6dbd64097f73e2d27606a09"} Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.840115 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41261613-b288-4f45-bfea-3400abcd5ae9","Type":"ContainerDied","Data":"212fde9efb37718e84c806158a938d367ae1d62b31fa1e89dc2529cb0dd07037"} Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.840134 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41261613-b288-4f45-bfea-3400abcd5ae9","Type":"ContainerDied","Data":"4eb595a454390cd5c11e3c09015ac53dc906291a734f14d7182e4c2936e55e21"} Jan 
21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.841415 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qbjgq" Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.866846 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qbjgq"] Jan 21 21:36:48 crc kubenswrapper[4860]: I0121 21:36:48.875430 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qbjgq"] Jan 21 21:36:48 crc kubenswrapper[4860]: E0121 21:36:48.921359 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:36:48 crc kubenswrapper[4860]: E0121 21:36:48.922915 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:36:48 crc kubenswrapper[4860]: E0121 21:36:48.924744 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:36:48 crc kubenswrapper[4860]: E0121 21:36:48.924904 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="d4d0733a-5369-4bee-98b5-44f2d588ccf7" containerName="watcher-applier" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.484578 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.597290 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d0733a-5369-4bee-98b5-44f2d588ccf7-logs\") pod \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.597652 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76d86\" (UniqueName: \"kubernetes.io/projected/d4d0733a-5369-4bee-98b5-44f2d588ccf7-kube-api-access-76d86\") pod \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.597843 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-config-data\") pod \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.598006 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-combined-ca-bundle\") pod \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.598145 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-cert-memcached-mtls\") 
pod \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\" (UID: \"d4d0733a-5369-4bee-98b5-44f2d588ccf7\") " Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.598982 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d0733a-5369-4bee-98b5-44f2d588ccf7-logs" (OuterVolumeSpecName: "logs") pod "d4d0733a-5369-4bee-98b5-44f2d588ccf7" (UID: "d4d0733a-5369-4bee-98b5-44f2d588ccf7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.599604 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d0733a-5369-4bee-98b5-44f2d588ccf7-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.607443 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d0733a-5369-4bee-98b5-44f2d588ccf7-kube-api-access-76d86" (OuterVolumeSpecName: "kube-api-access-76d86") pod "d4d0733a-5369-4bee-98b5-44f2d588ccf7" (UID: "d4d0733a-5369-4bee-98b5-44f2d588ccf7"). InnerVolumeSpecName "kube-api-access-76d86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.693213 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-cdv2f"] Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.699670 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4d0733a-5369-4bee-98b5-44f2d588ccf7" (UID: "d4d0733a-5369-4bee-98b5-44f2d588ccf7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.705853 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76d86\" (UniqueName: \"kubernetes.io/projected/d4d0733a-5369-4bee-98b5-44f2d588ccf7-kube-api-access-76d86\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.706225 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.706967 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-config-data" (OuterVolumeSpecName: "config-data") pod "d4d0733a-5369-4bee-98b5-44f2d588ccf7" (UID: "d4d0733a-5369-4bee-98b5-44f2d588ccf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.723076 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-cdv2f"] Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.735508 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher4ccb-account-delete-gzbfj"] Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.742504 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "d4d0733a-5369-4bee-98b5-44f2d588ccf7" (UID: "d4d0733a-5369-4bee-98b5-44f2d588ccf7"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.743190 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher4ccb-account-delete-gzbfj"] Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.752443 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8"] Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.766089 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-4ccb-account-create-update-59jc8"] Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.808811 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.808874 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d4d0733a-5369-4bee-98b5-44f2d588ccf7-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.852668 4860 generic.go:334] "Generic (PLEG): container finished" podID="d4d0733a-5369-4bee-98b5-44f2d588ccf7" containerID="854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f" exitCode=0 Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.852737 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d4d0733a-5369-4bee-98b5-44f2d588ccf7","Type":"ContainerDied","Data":"854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f"} Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.852774 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"d4d0733a-5369-4bee-98b5-44f2d588ccf7","Type":"ContainerDied","Data":"fac690d18965d1f8c0b6a617dffc2541864e4e57731fbddfc044cca198b4f593"} Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.852795 4860 scope.go:117] "RemoveContainer" containerID="854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.852836 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.925282 4860 scope.go:117] "RemoveContainer" containerID="854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f" Jan 21 21:36:49 crc kubenswrapper[4860]: E0121 21:36:49.929592 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f\": container with ID starting with 854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f not found: ID does not exist" containerID="854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.929679 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f"} err="failed to get container status \"854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f\": rpc error: code = NotFound desc = could not find container \"854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f\": container with ID starting with 854fbcede377ac4b117c35b71dd270743f74fa1117ceff90a7a300b38dfd833f not found: ID does not exist" Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.977656 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:36:49 crc kubenswrapper[4860]: I0121 21:36:49.986953 
4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:36:50 crc kubenswrapper[4860]: I0121 21:36:50.579677 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:36:50 crc kubenswrapper[4860]: E0121 21:36:50.580783 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:36:50 crc kubenswrapper[4860]: I0121 21:36:50.590925 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3d99bd-a6ee-4d83-b601-41d63c5d408c" path="/var/lib/kubelet/pods/1d3d99bd-a6ee-4d83-b601-41d63c5d408c/volumes" Jan 21 21:36:50 crc kubenswrapper[4860]: I0121 21:36:50.591588 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="334fb0f4-eb8f-4da4-a69b-35c71ea5eebe" path="/var/lib/kubelet/pods/334fb0f4-eb8f-4da4-a69b-35c71ea5eebe/volumes" Jan 21 21:36:50 crc kubenswrapper[4860]: I0121 21:36:50.592309 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fbf5887-97be-4c75-8de2-81b1484db9f5" path="/var/lib/kubelet/pods/4fbf5887-97be-4c75-8de2-81b1484db9f5/volumes" Jan 21 21:36:50 crc kubenswrapper[4860]: I0121 21:36:50.593033 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="517c2f91-b000-48db-9db7-8ed857b995c8" path="/var/lib/kubelet/pods/517c2f91-b000-48db-9db7-8ed857b995c8/volumes" Jan 21 21:36:50 crc kubenswrapper[4860]: I0121 21:36:50.594598 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d0733a-5369-4bee-98b5-44f2d588ccf7" 
path="/var/lib/kubelet/pods/d4d0733a-5369-4bee-98b5-44f2d588ccf7/volumes" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.126203 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-6zs2l"] Jan 21 21:36:52 crc kubenswrapper[4860]: E0121 21:36:52.126895 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d92a54a-c04b-4854-8187-34696c121452" containerName="watcher-api" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.126916 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d92a54a-c04b-4854-8187-34696c121452" containerName="watcher-api" Jan 21 21:36:52 crc kubenswrapper[4860]: E0121 21:36:52.126960 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517c2f91-b000-48db-9db7-8ed857b995c8" containerName="extract-utilities" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.126970 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="517c2f91-b000-48db-9db7-8ed857b995c8" containerName="extract-utilities" Jan 21 21:36:52 crc kubenswrapper[4860]: E0121 21:36:52.126995 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517c2f91-b000-48db-9db7-8ed857b995c8" containerName="registry-server" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127009 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="517c2f91-b000-48db-9db7-8ed857b995c8" containerName="registry-server" Jan 21 21:36:52 crc kubenswrapper[4860]: E0121 21:36:52.127027 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d0733a-5369-4bee-98b5-44f2d588ccf7" containerName="watcher-applier" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127038 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d0733a-5369-4bee-98b5-44f2d588ccf7" containerName="watcher-applier" Jan 21 21:36:52 crc kubenswrapper[4860]: E0121 21:36:52.127058 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d92a54a-c04b-4854-8187-34696c121452" 
containerName="watcher-kuttl-api-log" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127079 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d92a54a-c04b-4854-8187-34696c121452" containerName="watcher-kuttl-api-log" Jan 21 21:36:52 crc kubenswrapper[4860]: E0121 21:36:52.127096 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517c2f91-b000-48db-9db7-8ed857b995c8" containerName="extract-content" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127105 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="517c2f91-b000-48db-9db7-8ed857b995c8" containerName="extract-content" Jan 21 21:36:52 crc kubenswrapper[4860]: E0121 21:36:52.127116 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068f0a99-9308-4095-b015-9c13638ca80b" containerName="watcher-decision-engine" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127126 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="068f0a99-9308-4095-b015-9c13638ca80b" containerName="watcher-decision-engine" Jan 21 21:36:52 crc kubenswrapper[4860]: E0121 21:36:52.127147 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="334fb0f4-eb8f-4da4-a69b-35c71ea5eebe" containerName="mariadb-account-delete" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127156 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="334fb0f4-eb8f-4da4-a69b-35c71ea5eebe" containerName="mariadb-account-delete" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127415 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="517c2f91-b000-48db-9db7-8ed857b995c8" containerName="registry-server" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127435 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="068f0a99-9308-4095-b015-9c13638ca80b" containerName="watcher-decision-engine" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127449 4860 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d4d0733a-5369-4bee-98b5-44f2d588ccf7" containerName="watcher-applier" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127461 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d92a54a-c04b-4854-8187-34696c121452" containerName="watcher-kuttl-api-log" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127475 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="334fb0f4-eb8f-4da4-a69b-35c71ea5eebe" containerName="mariadb-account-delete" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.127492 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d92a54a-c04b-4854-8187-34696c121452" containerName="watcher-api" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.128474 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6zs2l" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.140604 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-15cb-account-create-update-6blqv"] Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.142410 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.144650 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.149432 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6zs2l"] Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.158226 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k87t6\" (UniqueName: \"kubernetes.io/projected/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-kube-api-access-k87t6\") pod \"watcher-db-create-6zs2l\" (UID: \"feb7cf6d-b0da-4a06-b15f-0aebc81e5861\") " pod="watcher-kuttl-default/watcher-db-create-6zs2l" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.158718 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d178a458-cfdf-4958-b93d-d5618868c282-operator-scripts\") pod \"watcher-15cb-account-create-update-6blqv\" (UID: \"d178a458-cfdf-4958-b93d-d5618868c282\") " pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.159013 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntgkq\" (UniqueName: \"kubernetes.io/projected/d178a458-cfdf-4958-b93d-d5618868c282-kube-api-access-ntgkq\") pod \"watcher-15cb-account-create-update-6blqv\" (UID: \"d178a458-cfdf-4958-b93d-d5618868c282\") " pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.159085 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-operator-scripts\") pod \"watcher-db-create-6zs2l\" (UID: \"feb7cf6d-b0da-4a06-b15f-0aebc81e5861\") " pod="watcher-kuttl-default/watcher-db-create-6zs2l" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.189975 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-15cb-account-create-update-6blqv"] Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.262474 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d178a458-cfdf-4958-b93d-d5618868c282-operator-scripts\") pod \"watcher-15cb-account-create-update-6blqv\" (UID: \"d178a458-cfdf-4958-b93d-d5618868c282\") " pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.262582 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntgkq\" (UniqueName: \"kubernetes.io/projected/d178a458-cfdf-4958-b93d-d5618868c282-kube-api-access-ntgkq\") pod \"watcher-15cb-account-create-update-6blqv\" (UID: \"d178a458-cfdf-4958-b93d-d5618868c282\") " pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.262628 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-operator-scripts\") pod \"watcher-db-create-6zs2l\" (UID: \"feb7cf6d-b0da-4a06-b15f-0aebc81e5861\") " pod="watcher-kuttl-default/watcher-db-create-6zs2l" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.262706 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k87t6\" (UniqueName: \"kubernetes.io/projected/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-kube-api-access-k87t6\") pod \"watcher-db-create-6zs2l\" (UID: 
\"feb7cf6d-b0da-4a06-b15f-0aebc81e5861\") " pod="watcher-kuttl-default/watcher-db-create-6zs2l" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.264144 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d178a458-cfdf-4958-b93d-d5618868c282-operator-scripts\") pod \"watcher-15cb-account-create-update-6blqv\" (UID: \"d178a458-cfdf-4958-b93d-d5618868c282\") " pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.264519 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-operator-scripts\") pod \"watcher-db-create-6zs2l\" (UID: \"feb7cf6d-b0da-4a06-b15f-0aebc81e5861\") " pod="watcher-kuttl-default/watcher-db-create-6zs2l" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.288150 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k87t6\" (UniqueName: \"kubernetes.io/projected/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-kube-api-access-k87t6\") pod \"watcher-db-create-6zs2l\" (UID: \"feb7cf6d-b0da-4a06-b15f-0aebc81e5861\") " pod="watcher-kuttl-default/watcher-db-create-6zs2l" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.300339 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntgkq\" (UniqueName: \"kubernetes.io/projected/d178a458-cfdf-4958-b93d-d5618868c282-kube-api-access-ntgkq\") pod \"watcher-15cb-account-create-update-6blqv\" (UID: \"d178a458-cfdf-4958-b93d-d5618868c282\") " pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.448562 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6zs2l" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.489837 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.896881 4860 generic.go:334] "Generic (PLEG): container finished" podID="41261613-b288-4f45-bfea-3400abcd5ae9" containerID="607d18622e3922313a68ebc4c144a1f71beff8f95b6931e979e4d02fb1cbfa6b" exitCode=0 Jan 21 21:36:52 crc kubenswrapper[4860]: I0121 21:36:52.897082 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41261613-b288-4f45-bfea-3400abcd5ae9","Type":"ContainerDied","Data":"607d18622e3922313a68ebc4c144a1f71beff8f95b6931e979e4d02fb1cbfa6b"} Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.046605 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6zs2l"] Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.168466 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.184703 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-15cb-account-create-update-6blqv"] Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.285299 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-run-httpd\") pod \"41261613-b288-4f45-bfea-3400abcd5ae9\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.287627 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-log-httpd\") pod \"41261613-b288-4f45-bfea-3400abcd5ae9\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.288672 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-config-data\") pod \"41261613-b288-4f45-bfea-3400abcd5ae9\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.288813 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-combined-ca-bundle\") pod \"41261613-b288-4f45-bfea-3400abcd5ae9\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.289002 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx8j5\" (UniqueName: \"kubernetes.io/projected/41261613-b288-4f45-bfea-3400abcd5ae9-kube-api-access-hx8j5\") pod \"41261613-b288-4f45-bfea-3400abcd5ae9\" (UID: 
\"41261613-b288-4f45-bfea-3400abcd5ae9\") " Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.289142 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-ceilometer-tls-certs\") pod \"41261613-b288-4f45-bfea-3400abcd5ae9\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.289305 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-sg-core-conf-yaml\") pod \"41261613-b288-4f45-bfea-3400abcd5ae9\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.289492 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-scripts\") pod \"41261613-b288-4f45-bfea-3400abcd5ae9\" (UID: \"41261613-b288-4f45-bfea-3400abcd5ae9\") " Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.287462 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "41261613-b288-4f45-bfea-3400abcd5ae9" (UID: "41261613-b288-4f45-bfea-3400abcd5ae9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.288501 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "41261613-b288-4f45-bfea-3400abcd5ae9" (UID: "41261613-b288-4f45-bfea-3400abcd5ae9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.367417 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41261613-b288-4f45-bfea-3400abcd5ae9-kube-api-access-hx8j5" (OuterVolumeSpecName: "kube-api-access-hx8j5") pod "41261613-b288-4f45-bfea-3400abcd5ae9" (UID: "41261613-b288-4f45-bfea-3400abcd5ae9"). InnerVolumeSpecName "kube-api-access-hx8j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.367581 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-scripts" (OuterVolumeSpecName: "scripts") pod "41261613-b288-4f45-bfea-3400abcd5ae9" (UID: "41261613-b288-4f45-bfea-3400abcd5ae9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.399520 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.399562 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.399576 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41261613-b288-4f45-bfea-3400abcd5ae9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.399629 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx8j5\" (UniqueName: \"kubernetes.io/projected/41261613-b288-4f45-bfea-3400abcd5ae9-kube-api-access-hx8j5\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:53 
crc kubenswrapper[4860]: I0121 21:36:53.418506 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "41261613-b288-4f45-bfea-3400abcd5ae9" (UID: "41261613-b288-4f45-bfea-3400abcd5ae9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.452095 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "41261613-b288-4f45-bfea-3400abcd5ae9" (UID: "41261613-b288-4f45-bfea-3400abcd5ae9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.501508 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.501555 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.552154 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41261613-b288-4f45-bfea-3400abcd5ae9" (UID: "41261613-b288-4f45-bfea-3400abcd5ae9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.571384 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-config-data" (OuterVolumeSpecName: "config-data") pod "41261613-b288-4f45-bfea-3400abcd5ae9" (UID: "41261613-b288-4f45-bfea-3400abcd5ae9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.603104 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.603486 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41261613-b288-4f45-bfea-3400abcd5ae9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.924627 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" event={"ID":"d178a458-cfdf-4958-b93d-d5618868c282","Type":"ContainerStarted","Data":"8c949e0c7c73efdc0596e3edfa97e23f73af87b4f52efa4d7ace2c8b451b7bd1"} Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.925301 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" event={"ID":"d178a458-cfdf-4958-b93d-d5618868c282","Type":"ContainerStarted","Data":"f11a158d256b450c64fa9d75a35674a8b88fe5cd4fc3de8ab7faa9d2096691c2"} Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.932471 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-6zs2l" 
event={"ID":"feb7cf6d-b0da-4a06-b15f-0aebc81e5861","Type":"ContainerStarted","Data":"49150c25601e8eeee59a0c099f7b71262d286ef400b00c316a4aa556a05f68da"} Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.932541 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-6zs2l" event={"ID":"feb7cf6d-b0da-4a06-b15f-0aebc81e5861","Type":"ContainerStarted","Data":"67999610da7412c83644fa52abf2b98ed5344beffb1ace285bea9dff50c9d928"} Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.945540 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41261613-b288-4f45-bfea-3400abcd5ae9","Type":"ContainerDied","Data":"486914bb7461c8d5af37a57bf459a261387fa644d9ae58223dbac6ea8bacc34a"} Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.945631 4860 scope.go:117] "RemoveContainer" containerID="6a01133bd6306ae37f606eb618d10bc8a7ecbfbdd6dbd64097f73e2d27606a09" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.945854 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.948688 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" podStartSLOduration=1.948647147 podStartE2EDuration="1.948647147s" podCreationTimestamp="2026-01-21 21:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:36:53.946401997 +0000 UTC m=+1706.168580487" watchObservedRunningTime="2026-01-21 21:36:53.948647147 +0000 UTC m=+1706.170825627" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.997901 4860 scope.go:117] "RemoveContainer" containerID="212fde9efb37718e84c806158a938d367ae1d62b31fa1e89dc2529cb0dd07037" Jan 21 21:36:53 crc kubenswrapper[4860]: I0121 21:36:53.999434 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-6zs2l" podStartSLOduration=1.9993827899999999 podStartE2EDuration="1.99938279s" podCreationTimestamp="2026-01-21 21:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:36:53.981172728 +0000 UTC m=+1706.203351228" watchObservedRunningTime="2026-01-21 21:36:53.99938279 +0000 UTC m=+1706.221561270" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.034056 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.038174 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.052435 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:54 crc kubenswrapper[4860]: E0121 21:36:54.052891 4860 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="ceilometer-central-agent" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.052914 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="ceilometer-central-agent" Jan 21 21:36:54 crc kubenswrapper[4860]: E0121 21:36:54.052923 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="sg-core" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.052962 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="sg-core" Jan 21 21:36:54 crc kubenswrapper[4860]: E0121 21:36:54.052998 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="proxy-httpd" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.053008 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="proxy-httpd" Jan 21 21:36:54 crc kubenswrapper[4860]: E0121 21:36:54.053018 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="ceilometer-notification-agent" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.053024 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="ceilometer-notification-agent" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.053427 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="ceilometer-central-agent" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.053449 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="sg-core" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.053461 4860 
memory_manager.go:354] "RemoveStaleState removing state" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="proxy-httpd" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.053474 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" containerName="ceilometer-notification-agent" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.055396 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.059352 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.059444 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.060154 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.078726 4860 scope.go:117] "RemoveContainer" containerID="607d18622e3922313a68ebc4c144a1f71beff8f95b6931e979e4d02fb1cbfa6b" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.108874 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.114531 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.114606 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-config-data\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.114633 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-scripts\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.114656 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-run-httpd\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.114715 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.114746 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.114780 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-log-httpd\") pod 
\"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.114814 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47ks\" (UniqueName: \"kubernetes.io/projected/9c5591e3-e2bd-40a9-b207-6fd48c26a725-kube-api-access-q47ks\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.143236 4860 scope.go:117] "RemoveContainer" containerID="4eb595a454390cd5c11e3c09015ac53dc906291a734f14d7182e4c2936e55e21" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.216863 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q47ks\" (UniqueName: \"kubernetes.io/projected/9c5591e3-e2bd-40a9-b207-6fd48c26a725-kube-api-access-q47ks\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.216963 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.217017 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-config-data\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.217043 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-scripts\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.217064 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-run-httpd\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.217101 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.217130 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.217163 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-log-httpd\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.217717 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-log-httpd\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.219457 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-run-httpd\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.225387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.225668 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-scripts\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.226789 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.227022 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-config-data\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.230205 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.241636 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q47ks\" (UniqueName: \"kubernetes.io/projected/9c5591e3-e2bd-40a9-b207-6fd48c26a725-kube-api-access-q47ks\") pod \"ceilometer-0\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.431012 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.591641 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41261613-b288-4f45-bfea-3400abcd5ae9" path="/var/lib/kubelet/pods/41261613-b288-4f45-bfea-3400abcd5ae9/volumes" Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.942400 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:36:54 crc kubenswrapper[4860]: W0121 21:36:54.952448 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c5591e3_e2bd_40a9_b207_6fd48c26a725.slice/crio-25ec71c7d96ef918aa786b9b548682bccce5f41f81f0dfa4c2dbbe92e9069b04 WatchSource:0}: Error finding container 25ec71c7d96ef918aa786b9b548682bccce5f41f81f0dfa4c2dbbe92e9069b04: Status 404 returned error can't find the container with id 25ec71c7d96ef918aa786b9b548682bccce5f41f81f0dfa4c2dbbe92e9069b04 Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.970009 4860 generic.go:334] "Generic (PLEG): container finished" podID="d178a458-cfdf-4958-b93d-d5618868c282" containerID="8c949e0c7c73efdc0596e3edfa97e23f73af87b4f52efa4d7ace2c8b451b7bd1" exitCode=0 Jan 21 21:36:54 crc kubenswrapper[4860]: 
I0121 21:36:54.970421 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" event={"ID":"d178a458-cfdf-4958-b93d-d5618868c282","Type":"ContainerDied","Data":"8c949e0c7c73efdc0596e3edfa97e23f73af87b4f52efa4d7ace2c8b451b7bd1"} Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.974523 4860 generic.go:334] "Generic (PLEG): container finished" podID="feb7cf6d-b0da-4a06-b15f-0aebc81e5861" containerID="49150c25601e8eeee59a0c099f7b71262d286ef400b00c316a4aa556a05f68da" exitCode=0 Jan 21 21:36:54 crc kubenswrapper[4860]: I0121 21:36:54.974567 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-6zs2l" event={"ID":"feb7cf6d-b0da-4a06-b15f-0aebc81e5861","Type":"ContainerDied","Data":"49150c25601e8eeee59a0c099f7b71262d286ef400b00c316a4aa556a05f68da"} Jan 21 21:36:55 crc kubenswrapper[4860]: I0121 21:36:55.998570 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9c5591e3-e2bd-40a9-b207-6fd48c26a725","Type":"ContainerStarted","Data":"59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8"} Jan 21 21:36:55 crc kubenswrapper[4860]: I0121 21:36:55.998648 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9c5591e3-e2bd-40a9-b207-6fd48c26a725","Type":"ContainerStarted","Data":"25ec71c7d96ef918aa786b9b548682bccce5f41f81f0dfa4c2dbbe92e9069b04"} Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.439998 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6zs2l" Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.464336 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-operator-scripts\") pod \"feb7cf6d-b0da-4a06-b15f-0aebc81e5861\" (UID: \"feb7cf6d-b0da-4a06-b15f-0aebc81e5861\") " Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.464682 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k87t6\" (UniqueName: \"kubernetes.io/projected/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-kube-api-access-k87t6\") pod \"feb7cf6d-b0da-4a06-b15f-0aebc81e5861\" (UID: \"feb7cf6d-b0da-4a06-b15f-0aebc81e5861\") " Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.465343 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "feb7cf6d-b0da-4a06-b15f-0aebc81e5861" (UID: "feb7cf6d-b0da-4a06-b15f-0aebc81e5861"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.472207 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-kube-api-access-k87t6" (OuterVolumeSpecName: "kube-api-access-k87t6") pod "feb7cf6d-b0da-4a06-b15f-0aebc81e5861" (UID: "feb7cf6d-b0da-4a06-b15f-0aebc81e5861"). InnerVolumeSpecName "kube-api-access-k87t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.523057 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.566633 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntgkq\" (UniqueName: \"kubernetes.io/projected/d178a458-cfdf-4958-b93d-d5618868c282-kube-api-access-ntgkq\") pod \"d178a458-cfdf-4958-b93d-d5618868c282\" (UID: \"d178a458-cfdf-4958-b93d-d5618868c282\") " Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.567299 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d178a458-cfdf-4958-b93d-d5618868c282-operator-scripts\") pod \"d178a458-cfdf-4958-b93d-d5618868c282\" (UID: \"d178a458-cfdf-4958-b93d-d5618868c282\") " Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.567881 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k87t6\" (UniqueName: \"kubernetes.io/projected/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-kube-api-access-k87t6\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.567908 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb7cf6d-b0da-4a06-b15f-0aebc81e5861-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.568325 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d178a458-cfdf-4958-b93d-d5618868c282-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d178a458-cfdf-4958-b93d-d5618868c282" (UID: "d178a458-cfdf-4958-b93d-d5618868c282"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.588491 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d178a458-cfdf-4958-b93d-d5618868c282-kube-api-access-ntgkq" (OuterVolumeSpecName: "kube-api-access-ntgkq") pod "d178a458-cfdf-4958-b93d-d5618868c282" (UID: "d178a458-cfdf-4958-b93d-d5618868c282"). InnerVolumeSpecName "kube-api-access-ntgkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.681421 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d178a458-cfdf-4958-b93d-d5618868c282-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:56 crc kubenswrapper[4860]: I0121 21:36:56.681503 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntgkq\" (UniqueName: \"kubernetes.io/projected/d178a458-cfdf-4958-b93d-d5618868c282-kube-api-access-ntgkq\") on node \"crc\" DevicePath \"\"" Jan 21 21:36:57 crc kubenswrapper[4860]: I0121 21:36:57.024980 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9c5591e3-e2bd-40a9-b207-6fd48c26a725","Type":"ContainerStarted","Data":"cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217"} Jan 21 21:36:57 crc kubenswrapper[4860]: I0121 21:36:57.037203 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" event={"ID":"d178a458-cfdf-4958-b93d-d5618868c282","Type":"ContainerDied","Data":"f11a158d256b450c64fa9d75a35674a8b88fe5cd4fc3de8ab7faa9d2096691c2"} Jan 21 21:36:57 crc kubenswrapper[4860]: I0121 21:36:57.037266 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f11a158d256b450c64fa9d75a35674a8b88fe5cd4fc3de8ab7faa9d2096691c2" Jan 21 21:36:57 crc kubenswrapper[4860]: I0121 
21:36:57.037350 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-15cb-account-create-update-6blqv" Jan 21 21:36:57 crc kubenswrapper[4860]: I0121 21:36:57.048632 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-6zs2l" event={"ID":"feb7cf6d-b0da-4a06-b15f-0aebc81e5861","Type":"ContainerDied","Data":"67999610da7412c83644fa52abf2b98ed5344beffb1ace285bea9dff50c9d928"} Jan 21 21:36:57 crc kubenswrapper[4860]: I0121 21:36:57.048709 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67999610da7412c83644fa52abf2b98ed5344beffb1ace285bea9dff50c9d928" Jan 21 21:36:57 crc kubenswrapper[4860]: I0121 21:36:57.048818 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6zs2l" Jan 21 21:36:57 crc kubenswrapper[4860]: I0121 21:36:57.174659 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 21:36:58 crc kubenswrapper[4860]: I0121 21:36:58.061404 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9c5591e3-e2bd-40a9-b207-6fd48c26a725","Type":"ContainerStarted","Data":"2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f"} Jan 21 21:36:59 crc kubenswrapper[4860]: I0121 21:36:59.105841 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9c5591e3-e2bd-40a9-b207-6fd48c26a725","Type":"ContainerStarted","Data":"9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31"} Jan 21 21:36:59 crc kubenswrapper[4860]: I0121 21:36:59.106314 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:36:59 crc kubenswrapper[4860]: I0121 21:36:59.139640 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.634893537 podStartE2EDuration="5.13959839s" podCreationTimestamp="2026-01-21 21:36:54 +0000 UTC" firstStartedPulling="2026-01-21 21:36:54.956188926 +0000 UTC m=+1707.178367396" lastFinishedPulling="2026-01-21 21:36:58.460893779 +0000 UTC m=+1710.683072249" observedRunningTime="2026-01-21 21:36:59.134541994 +0000 UTC m=+1711.356720474" watchObservedRunningTime="2026-01-21 21:36:59.13959839 +0000 UTC m=+1711.361776860" Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.709071 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"] Jan 21 21:37:02 crc kubenswrapper[4860]: E0121 21:37:02.711060 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d178a458-cfdf-4958-b93d-d5618868c282" containerName="mariadb-account-create-update" Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.712668 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d178a458-cfdf-4958-b93d-d5618868c282" containerName="mariadb-account-create-update" Jan 21 21:37:02 crc kubenswrapper[4860]: E0121 21:37:02.712849 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb7cf6d-b0da-4a06-b15f-0aebc81e5861" containerName="mariadb-database-create" Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.712924 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb7cf6d-b0da-4a06-b15f-0aebc81e5861" containerName="mariadb-database-create" Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.713211 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb7cf6d-b0da-4a06-b15f-0aebc81e5861" containerName="mariadb-database-create" Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.713311 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d178a458-cfdf-4958-b93d-d5618868c282" containerName="mariadb-account-create-update" Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.714101 4860 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg" Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.720368 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-nnsqz" Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.720397 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.724090 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"] Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.808703 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg" Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.808840 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5vtc\" (UniqueName: \"kubernetes.io/projected/503f6417-9273-4609-850f-64ce2e41caad-kube-api-access-p5vtc\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg" Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.809018 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg" Jan 21 21:37:02 crc kubenswrapper[4860]: 
I0121 21:37:02.809386 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-config-data\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.912495 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-config-data\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.912670 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.912731 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5vtc\" (UniqueName: \"kubernetes.io/projected/503f6417-9273-4609-850f-64ce2e41caad-kube-api-access-p5vtc\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.912801 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.925616 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.930309 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.936716 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5vtc\" (UniqueName: \"kubernetes.io/projected/503f6417-9273-4609-850f-64ce2e41caad-kube-api-access-p5vtc\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:02 crc kubenswrapper[4860]: I0121 21:37:02.945756 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-config-data\") pod \"watcher-kuttl-db-sync-tlbqg\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:03 crc kubenswrapper[4860]: I0121 21:37:03.050424 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:03 crc kubenswrapper[4860]: I0121 21:37:03.678683 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"]
Jan 21 21:37:04 crc kubenswrapper[4860]: I0121 21:37:04.152615 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg" event={"ID":"503f6417-9273-4609-850f-64ce2e41caad","Type":"ContainerStarted","Data":"2873a1e236edd2c5e97ea43c6121f7a5b206043c9c00c401d62d58dcc42b50db"}
Jan 21 21:37:04 crc kubenswrapper[4860]: I0121 21:37:04.153052 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg" event={"ID":"503f6417-9273-4609-850f-64ce2e41caad","Type":"ContainerStarted","Data":"fc6bc1ee47a5b885b117a8d6c4a58df830f9d52179d598ece2221eab0739fd36"}
Jan 21 21:37:04 crc kubenswrapper[4860]: I0121 21:37:04.176610 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg" podStartSLOduration=2.176590243 podStartE2EDuration="2.176590243s" podCreationTimestamp="2026-01-21 21:37:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:37:04.173450046 +0000 UTC m=+1716.395628506" watchObservedRunningTime="2026-01-21 21:37:04.176590243 +0000 UTC m=+1716.398768713"
Jan 21 21:37:04 crc kubenswrapper[4860]: I0121 21:37:04.579706 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97"
Jan 21 21:37:04 crc kubenswrapper[4860]: E0121 21:37:04.580067 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:37:07 crc kubenswrapper[4860]: I0121 21:37:07.186712 4860 generic.go:334] "Generic (PLEG): container finished" podID="503f6417-9273-4609-850f-64ce2e41caad" containerID="2873a1e236edd2c5e97ea43c6121f7a5b206043c9c00c401d62d58dcc42b50db" exitCode=0
Jan 21 21:37:07 crc kubenswrapper[4860]: I0121 21:37:07.186799 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg" event={"ID":"503f6417-9273-4609-850f-64ce2e41caad","Type":"ContainerDied","Data":"2873a1e236edd2c5e97ea43c6121f7a5b206043c9c00c401d62d58dcc42b50db"}
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.559267 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.631772 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-db-sync-config-data\") pod \"503f6417-9273-4609-850f-64ce2e41caad\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") "
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.631871 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-combined-ca-bundle\") pod \"503f6417-9273-4609-850f-64ce2e41caad\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") "
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.631981 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-config-data\") pod \"503f6417-9273-4609-850f-64ce2e41caad\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") "
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.632090 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5vtc\" (UniqueName: \"kubernetes.io/projected/503f6417-9273-4609-850f-64ce2e41caad-kube-api-access-p5vtc\") pod \"503f6417-9273-4609-850f-64ce2e41caad\" (UID: \"503f6417-9273-4609-850f-64ce2e41caad\") "
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.639005 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/503f6417-9273-4609-850f-64ce2e41caad-kube-api-access-p5vtc" (OuterVolumeSpecName: "kube-api-access-p5vtc") pod "503f6417-9273-4609-850f-64ce2e41caad" (UID: "503f6417-9273-4609-850f-64ce2e41caad"). InnerVolumeSpecName "kube-api-access-p5vtc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.639426 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "503f6417-9273-4609-850f-64ce2e41caad" (UID: "503f6417-9273-4609-850f-64ce2e41caad"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.660746 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "503f6417-9273-4609-850f-64ce2e41caad" (UID: "503f6417-9273-4609-850f-64ce2e41caad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.685765 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-config-data" (OuterVolumeSpecName: "config-data") pod "503f6417-9273-4609-850f-64ce2e41caad" (UID: "503f6417-9273-4609-850f-64ce2e41caad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.734560 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.734616 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.734630 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5vtc\" (UniqueName: \"kubernetes.io/projected/503f6417-9273-4609-850f-64ce2e41caad-kube-api-access-p5vtc\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:08 crc kubenswrapper[4860]: I0121 21:37:08.734650 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/503f6417-9273-4609-850f-64ce2e41caad-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.207685 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg" event={"ID":"503f6417-9273-4609-850f-64ce2e41caad","Type":"ContainerDied","Data":"fc6bc1ee47a5b885b117a8d6c4a58df830f9d52179d598ece2221eab0739fd36"}
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.207741 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc6bc1ee47a5b885b117a8d6c4a58df830f9d52179d598ece2221eab0739fd36"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.207761 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.526996 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:37:09 crc kubenswrapper[4860]: E0121 21:37:09.528133 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="503f6417-9273-4609-850f-64ce2e41caad" containerName="watcher-kuttl-db-sync"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.528161 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="503f6417-9273-4609-850f-64ce2e41caad" containerName="watcher-kuttl-db-sync"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.528363 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="503f6417-9273-4609-850f-64ce2e41caad" containerName="watcher-kuttl-db-sync"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.537753 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.545962 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-nnsqz"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.546570 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.556338 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.556483 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7qvm\" (UniqueName: \"kubernetes.io/projected/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-kube-api-access-f7qvm\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.556596 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.556663 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.556789 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.556843 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.573485 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.655055 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.656744 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.668656 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.668755 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.668896 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.668958 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.669501 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.671209 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.672008 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7qvm\" (UniqueName: \"kubernetes.io/projected/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-kube-api-access-f7qvm\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.676155 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.680247 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.680750 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.681020 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.683635 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.685413 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.715015 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7qvm\" (UniqueName: \"kubernetes.io/projected/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-kube-api-access-f7qvm\") pod \"watcher-kuttl-api-0\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.715157 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.716824 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.726316 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.755528 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.777006 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtxvl\" (UniqueName: \"kubernetes.io/projected/2a322f50-9a50-4939-88dc-28ef9f949539-kube-api-access-jtxvl\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.777062 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.777084 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.777215 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.777234 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.777269 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.777407 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8md\" (UniqueName: \"kubernetes.io/projected/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-kube-api-access-8w8md\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.777445 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.777486 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a322f50-9a50-4939-88dc-28ef9f949539-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.777511 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.777553 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.879166 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.879281 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtxvl\" (UniqueName: \"kubernetes.io/projected/2a322f50-9a50-4939-88dc-28ef9f949539-kube-api-access-jtxvl\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.879304 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.879325 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.879359 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.879376 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.879418 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.879449 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w8md\" (UniqueName: \"kubernetes.io/projected/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-kube-api-access-8w8md\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.879488 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.879533 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a322f50-9a50-4939-88dc-28ef9f949539-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.879559 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.880157 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.888616 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.889232 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a322f50-9a50-4939-88dc-28ef9f949539-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.898729 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.902167 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.902445 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.909772 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.930342 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.930837 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.932956 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.951658 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtxvl\" (UniqueName: \"kubernetes.io/projected/2a322f50-9a50-4939-88dc-28ef9f949539-kube-api-access-jtxvl\") pod \"watcher-kuttl-applier-0\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:09 crc kubenswrapper[4860]: I0121 21:37:09.952877 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w8md\" (UniqueName: \"kubernetes.io/projected/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-kube-api-access-8w8md\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:10 crc kubenswrapper[4860]: I0121 21:37:10.068142 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:10 crc kubenswrapper[4860]: I0121 21:37:10.081450 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:10 crc kubenswrapper[4860]: W0121 21:37:10.428958 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b293d1_6fc7_48de_b863_fa7bd5cce92b.slice/crio-a27830b5fe37c8149937f9229d74bc6595d3b670eaeca5c5996228dcce9dc1a6 WatchSource:0}: Error finding container a27830b5fe37c8149937f9229d74bc6595d3b670eaeca5c5996228dcce9dc1a6: Status 404 returned error can't find the container with id a27830b5fe37c8149937f9229d74bc6595d3b670eaeca5c5996228dcce9dc1a6
Jan 21 21:37:10 crc kubenswrapper[4860]: I0121 21:37:10.432485 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:37:10 crc kubenswrapper[4860]: I0121 21:37:10.752710 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:37:10 crc kubenswrapper[4860]: W0121 21:37:10.752883 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a7e6b59_e8c7_413d_9e8d_56f0fd0e3054.slice/crio-a5a12768be03f6b5a084c2ae4b236a42ac54b7064af84f5ef00dcef632d52fb4 WatchSource:0}: Error finding container a5a12768be03f6b5a084c2ae4b236a42ac54b7064af84f5ef00dcef632d52fb4: Status 404 returned error can't find the container with id a5a12768be03f6b5a084c2ae4b236a42ac54b7064af84f5ef00dcef632d52fb4
Jan 21 21:37:10 crc kubenswrapper[4860]: I0121 21:37:10.847789 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:37:11 crc kubenswrapper[4860]: I0121 21:37:11.257179 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054","Type":"ContainerStarted","Data":"cb885b864bfddf712126e4328bae845e3a70d6d5831c4b73842ecfd75edb5fa8"}
Jan 21 21:37:11 crc kubenswrapper[4860]: I0121 21:37:11.257639 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054","Type":"ContainerStarted","Data":"a5a12768be03f6b5a084c2ae4b236a42ac54b7064af84f5ef00dcef632d52fb4"}
Jan 21 21:37:11 crc kubenswrapper[4860]: I0121 21:37:11.261139 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7b293d1-6fc7-48de-b863-fa7bd5cce92b","Type":"ContainerStarted","Data":"9083d070cb87094164f83b3db47e259e777a29fbe641ac8eebf0e5fc5356eda8"}
Jan 21 21:37:11 crc kubenswrapper[4860]: I0121 21:37:11.261231 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7b293d1-6fc7-48de-b863-fa7bd5cce92b","Type":"ContainerStarted","Data":"d11176058093ed28bf7b68447f9ad9fb1dd5e9bb362f0666d4eb3712753fa882"}
Jan 21 21:37:11 crc kubenswrapper[4860]: I0121 21:37:11.261255 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7b293d1-6fc7-48de-b863-fa7bd5cce92b","Type":"ContainerStarted","Data":"a27830b5fe37c8149937f9229d74bc6595d3b670eaeca5c5996228dcce9dc1a6"}
Jan 21 21:37:11 crc kubenswrapper[4860]: I0121 21:37:11.265525 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:11 crc kubenswrapper[4860]: I0121 21:37:11.270809 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2a322f50-9a50-4939-88dc-28ef9f949539","Type":"ContainerStarted","Data":"0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07"}
Jan 21 21:37:11 crc kubenswrapper[4860]: I0121 21:37:11.270889 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2a322f50-9a50-4939-88dc-28ef9f949539","Type":"ContainerStarted","Data":"a8e3ec94b36688f35bc0bf14ddd03b40fbd03ae25e9d18d48840c7f414a5fc2a"}
Jan 21 21:37:11 crc kubenswrapper[4860]: I0121 21:37:11.292326 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.292285043 podStartE2EDuration="2.292285043s" podCreationTimestamp="2026-01-21 21:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:37:11.286317469 +0000 UTC m=+1723.508495959" watchObservedRunningTime="2026-01-21 21:37:11.292285043 +0000 UTC m=+1723.514463543"
Jan 21 21:37:11 crc kubenswrapper[4860]: I0121 21:37:11.322506 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.322481852 podStartE2EDuration="2.322481852s" podCreationTimestamp="2026-01-21 21:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01
00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:37:11.314244789 +0000 UTC m=+1723.536423279" watchObservedRunningTime="2026-01-21 21:37:11.322481852 +0000 UTC m=+1723.544660322" Jan 21 21:37:11 crc kubenswrapper[4860]: I0121 21:37:11.339892 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.339848037 podStartE2EDuration="2.339848037s" podCreationTimestamp="2026-01-21 21:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:37:11.334147992 +0000 UTC m=+1723.556326472" watchObservedRunningTime="2026-01-21 21:37:11.339848037 +0000 UTC m=+1723.562026507" Jan 21 21:37:13 crc kubenswrapper[4860]: I0121 21:37:13.291648 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:37:14 crc kubenswrapper[4860]: I0121 21:37:14.246175 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:14 crc kubenswrapper[4860]: I0121 21:37:14.903457 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:15 crc kubenswrapper[4860]: I0121 21:37:15.068951 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:16 crc kubenswrapper[4860]: I0121 21:37:16.579681 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:37:16 crc kubenswrapper[4860]: E0121 21:37:16.580576 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:37:19 crc kubenswrapper[4860]: I0121 21:37:19.903921 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:19 crc kubenswrapper[4860]: I0121 21:37:19.923798 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:20 crc kubenswrapper[4860]: I0121 21:37:20.069274 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:20 crc kubenswrapper[4860]: I0121 21:37:20.083290 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:20 crc kubenswrapper[4860]: I0121 21:37:20.095413 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:20 crc kubenswrapper[4860]: I0121 21:37:20.111506 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:20 crc kubenswrapper[4860]: I0121 21:37:20.388083 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:20 crc kubenswrapper[4860]: I0121 21:37:20.394379 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:20 crc kubenswrapper[4860]: I0121 21:37:20.437574 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:20 crc kubenswrapper[4860]: I0121 21:37:20.446960 
4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:22 crc kubenswrapper[4860]: I0121 21:37:22.780076 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:37:22 crc kubenswrapper[4860]: I0121 21:37:22.781042 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="ceilometer-central-agent" containerID="cri-o://59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8" gracePeriod=30 Jan 21 21:37:22 crc kubenswrapper[4860]: I0121 21:37:22.781245 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="proxy-httpd" containerID="cri-o://9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31" gracePeriod=30 Jan 21 21:37:22 crc kubenswrapper[4860]: I0121 21:37:22.781303 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="sg-core" containerID="cri-o://2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f" gracePeriod=30 Jan 21 21:37:22 crc kubenswrapper[4860]: I0121 21:37:22.781362 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="ceilometer-notification-agent" containerID="cri-o://cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217" gracePeriod=30 Jan 21 21:37:22 crc kubenswrapper[4860]: I0121 21:37:22.794594 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="proxy-httpd" probeResult="failure" 
output="Get \"https://10.217.0.171:3000/\": EOF" Jan 21 21:37:23 crc kubenswrapper[4860]: E0121 21:37:23.275356 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c5591e3_e2bd_40a9_b207_6fd48c26a725.slice/crio-conmon-9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c5591e3_e2bd_40a9_b207_6fd48c26a725.slice/crio-9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31.scope\": RecentStats: unable to find data in memory cache]" Jan 21 21:37:23 crc kubenswrapper[4860]: I0121 21:37:23.423255 4860 generic.go:334] "Generic (PLEG): container finished" podID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerID="9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31" exitCode=0 Jan 21 21:37:23 crc kubenswrapper[4860]: I0121 21:37:23.423321 4860 generic.go:334] "Generic (PLEG): container finished" podID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerID="2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f" exitCode=2 Jan 21 21:37:23 crc kubenswrapper[4860]: I0121 21:37:23.423293 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9c5591e3-e2bd-40a9-b207-6fd48c26a725","Type":"ContainerDied","Data":"9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31"} Jan 21 21:37:23 crc kubenswrapper[4860]: I0121 21:37:23.423609 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9c5591e3-e2bd-40a9-b207-6fd48c26a725","Type":"ContainerDied","Data":"2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f"} Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.110753 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"] Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.132900 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tlbqg"] Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.183790 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher15cb-account-delete-ktbwr"] Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.185302 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.199298 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.199641 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" containerName="watcher-decision-engine" containerID="cri-o://cb885b864bfddf712126e4328bae845e3a70d6d5831c4b73842ecfd75edb5fa8" gracePeriod=30 Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.231897 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher15cb-account-delete-ktbwr"] Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.270651 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.271008 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2a322f50-9a50-4939-88dc-28ef9f949539" containerName="watcher-applier" containerID="cri-o://0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07" gracePeriod=30 Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.337105 4860 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.337511 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerName="watcher-kuttl-api-log" containerID="cri-o://d11176058093ed28bf7b68447f9ad9fb1dd5e9bb362f0666d4eb3712753fa882" gracePeriod=30 Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.337765 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerName="watcher-api" containerID="cri-o://9083d070cb87094164f83b3db47e259e777a29fbe641ac8eebf0e5fc5356eda8" gracePeriod=30 Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.353799 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/101f9698-917f-42d4-8c2e-74909b6566cc-operator-scripts\") pod \"watcher15cb-account-delete-ktbwr\" (UID: \"101f9698-917f-42d4-8c2e-74909b6566cc\") " pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.353954 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9n5x\" (UniqueName: \"kubernetes.io/projected/101f9698-917f-42d4-8c2e-74909b6566cc-kube-api-access-v9n5x\") pod \"watcher15cb-account-delete-ktbwr\" (UID: \"101f9698-917f-42d4-8c2e-74909b6566cc\") " pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.431993 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.171:3000/\": dial tcp 
10.217.0.171:3000: connect: connection refused" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.437904 4860 generic.go:334] "Generic (PLEG): container finished" podID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerID="59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8" exitCode=0 Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.437978 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9c5591e3-e2bd-40a9-b207-6fd48c26a725","Type":"ContainerDied","Data":"59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8"} Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.457112 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/101f9698-917f-42d4-8c2e-74909b6566cc-operator-scripts\") pod \"watcher15cb-account-delete-ktbwr\" (UID: \"101f9698-917f-42d4-8c2e-74909b6566cc\") " pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.457238 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9n5x\" (UniqueName: \"kubernetes.io/projected/101f9698-917f-42d4-8c2e-74909b6566cc-kube-api-access-v9n5x\") pod \"watcher15cb-account-delete-ktbwr\" (UID: \"101f9698-917f-42d4-8c2e-74909b6566cc\") " pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.460191 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/101f9698-917f-42d4-8c2e-74909b6566cc-operator-scripts\") pod \"watcher15cb-account-delete-ktbwr\" (UID: \"101f9698-917f-42d4-8c2e-74909b6566cc\") " pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.493008 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v9n5x\" (UniqueName: \"kubernetes.io/projected/101f9698-917f-42d4-8c2e-74909b6566cc-kube-api-access-v9n5x\") pod \"watcher15cb-account-delete-ktbwr\" (UID: \"101f9698-917f-42d4-8c2e-74909b6566cc\") " pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.515880 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.594303 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="503f6417-9273-4609-850f-64ce2e41caad" path="/var/lib/kubelet/pods/503f6417-9273-4609-850f-64ce2e41caad/volumes" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.936351 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.978902 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-ceilometer-tls-certs\") pod \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.978983 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q47ks\" (UniqueName: \"kubernetes.io/projected/9c5591e3-e2bd-40a9-b207-6fd48c26a725-kube-api-access-q47ks\") pod \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.979031 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-log-httpd\") pod \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " Jan 21 
21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.979062 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-config-data\") pod \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.979138 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-scripts\") pod \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.979170 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-sg-core-conf-yaml\") pod \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.979205 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-combined-ca-bundle\") pod \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.979258 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-run-httpd\") pod \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\" (UID: \"9c5591e3-e2bd-40a9-b207-6fd48c26a725\") " Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.980665 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"9c5591e3-e2bd-40a9-b207-6fd48c26a725" (UID: "9c5591e3-e2bd-40a9-b207-6fd48c26a725"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.983099 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9c5591e3-e2bd-40a9-b207-6fd48c26a725" (UID: "9c5591e3-e2bd-40a9-b207-6fd48c26a725"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.986827 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c5591e3-e2bd-40a9-b207-6fd48c26a725-kube-api-access-q47ks" (OuterVolumeSpecName: "kube-api-access-q47ks") pod "9c5591e3-e2bd-40a9-b207-6fd48c26a725" (UID: "9c5591e3-e2bd-40a9-b207-6fd48c26a725"). InnerVolumeSpecName "kube-api-access-q47ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:37:24 crc kubenswrapper[4860]: I0121 21:37:24.987031 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-scripts" (OuterVolumeSpecName: "scripts") pod "9c5591e3-e2bd-40a9-b207-6fd48c26a725" (UID: "9c5591e3-e2bd-40a9-b207-6fd48c26a725"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.051233 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9c5591e3-e2bd-40a9-b207-6fd48c26a725" (UID: "9c5591e3-e2bd-40a9-b207-6fd48c26a725"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.081222 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.081256 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.081267 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.081279 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c5591e3-e2bd-40a9-b207-6fd48c26a725-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.081289 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q47ks\" (UniqueName: \"kubernetes.io/projected/9c5591e3-e2bd-40a9-b207-6fd48c26a725-kube-api-access-q47ks\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.086084 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.089175 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-ceilometer-tls-certs" 
(OuterVolumeSpecName: "ceilometer-tls-certs") pod "9c5591e3-e2bd-40a9-b207-6fd48c26a725" (UID: "9c5591e3-e2bd-40a9-b207-6fd48c26a725"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.098346 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.098608 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-config-data" (OuterVolumeSpecName: "config-data") pod "9c5591e3-e2bd-40a9-b207-6fd48c26a725" (UID: "9c5591e3-e2bd-40a9-b207-6fd48c26a725"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.102356 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.102477 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2a322f50-9a50-4939-88dc-28ef9f949539" containerName="watcher-applier" Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.122795 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c5591e3-e2bd-40a9-b207-6fd48c26a725" (UID: "9c5591e3-e2bd-40a9-b207-6fd48c26a725"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:25 crc kubenswrapper[4860]: W0121 21:37:25.164151 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod101f9698_917f_42d4_8c2e_74909b6566cc.slice/crio-a9c239cc6c819f7be792cdf7c14cdbd299c6c3b66368f5a4f1af089ff829aa41 WatchSource:0}: Error finding container a9c239cc6c819f7be792cdf7c14cdbd299c6c3b66368f5a4f1af089ff829aa41: Status 404 returned error can't find the container with id a9c239cc6c819f7be792cdf7c14cdbd299c6c3b66368f5a4f1af089ff829aa41 Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.164837 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher15cb-account-delete-ktbwr"] Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.183671 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.183723 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.183739 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5591e3-e2bd-40a9-b207-6fd48c26a725-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.453868 4860 generic.go:334] "Generic (PLEG): container finished" 
podID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerID="d11176058093ed28bf7b68447f9ad9fb1dd5e9bb362f0666d4eb3712753fa882" exitCode=143
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.454095 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7b293d1-6fc7-48de-b863-fa7bd5cce92b","Type":"ContainerDied","Data":"d11176058093ed28bf7b68447f9ad9fb1dd5e9bb362f0666d4eb3712753fa882"}
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.459971 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" event={"ID":"101f9698-917f-42d4-8c2e-74909b6566cc","Type":"ContainerStarted","Data":"a9c239cc6c819f7be792cdf7c14cdbd299c6c3b66368f5a4f1af089ff829aa41"}
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.473812 4860 generic.go:334] "Generic (PLEG): container finished" podID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerID="cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217" exitCode=0
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.473891 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9c5591e3-e2bd-40a9-b207-6fd48c26a725","Type":"ContainerDied","Data":"cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217"}
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.473958 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9c5591e3-e2bd-40a9-b207-6fd48c26a725","Type":"ContainerDied","Data":"25ec71c7d96ef918aa786b9b548682bccce5f41f81f0dfa4c2dbbe92e9069b04"}
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.473985 4860 scope.go:117] "RemoveContainer" containerID="9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.474203 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.523277 4860 scope.go:117] "RemoveContainer" containerID="2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.535635 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.544437 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.565455 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.565917 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="sg-core"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.565948 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="sg-core"
Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.565962 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="ceilometer-central-agent"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.565968 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="ceilometer-central-agent"
Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.565984 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="ceilometer-notification-agent"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.565992 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="ceilometer-notification-agent"
Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.566007 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="proxy-httpd"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.566013 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="proxy-httpd"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.566166 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="proxy-httpd"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.566184 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="ceilometer-central-agent"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.566193 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="ceilometer-notification-agent"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.566204 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" containerName="sg-core"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.571802 4860 scope.go:117] "RemoveContainer" containerID="cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.581610 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.588834 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.588834 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.590142 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.597106 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.603263 4860 scope.go:117] "RemoveContainer" containerID="59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.648233 4860 scope.go:117] "RemoveContainer" containerID="9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31"
Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.649743 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31\": container with ID starting with 9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31 not found: ID does not exist" containerID="9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.649820 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31"} err="failed to get container status \"9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31\": rpc error: code = NotFound desc = could not find container \"9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31\": container with ID starting with 9460ff12249b2a5a1a6f93458c7d18134d00da20dc7f45189fbe9f14d600fc31 not found: ID does not exist"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.649923 4860 scope.go:117] "RemoveContainer" containerID="2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f"
Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.655467 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f\": container with ID starting with 2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f not found: ID does not exist" containerID="2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.655532 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f"} err="failed to get container status \"2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f\": rpc error: code = NotFound desc = could not find container \"2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f\": container with ID starting with 2e0b404f0b86f23759dcf58b4013e9a4e7ef439676666ce5cf994949013cd57f not found: ID does not exist"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.655558 4860 scope.go:117] "RemoveContainer" containerID="cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217"
Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.656052 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217\": container with ID starting with cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217 not found: ID does not exist" containerID="cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.656079 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217"} err="failed to get container status \"cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217\": rpc error: code = NotFound desc = could not find container \"cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217\": container with ID starting with cf7e67028379ee45ce24c8fa5e06f2e994093768c4c2a805ccf2c4798b89a217 not found: ID does not exist"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.656094 4860 scope.go:117] "RemoveContainer" containerID="59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8"
Jan 21 21:37:25 crc kubenswrapper[4860]: E0121 21:37:25.656681 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8\": container with ID starting with 59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8 not found: ID does not exist" containerID="59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.656819 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8"} err="failed to get container status \"59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8\": rpc error: code = NotFound desc = could not find container \"59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8\": container with ID starting with 59dd17a24a7ab710c0a891ccfdb1171b86fe8415edf49569e6804fa94675bca8 not found: ID does not exist"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.712470 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.712537 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-config-data\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.712655 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-run-httpd\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.712693 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxpf5\" (UniqueName: \"kubernetes.io/projected/95c6f336-0111-4e92-baae-1c71a70320f0-kube-api-access-xxpf5\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.712750 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-scripts\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.712773 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.712833 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.712853 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-log-httpd\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.814698 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.814788 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-config-data\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.814878 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-run-httpd\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.814948 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxpf5\" (UniqueName: \"kubernetes.io/projected/95c6f336-0111-4e92-baae-1c71a70320f0-kube-api-access-xxpf5\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.815014 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-scripts\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.815037 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.815070 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.815105 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-log-httpd\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.816130 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-log-httpd\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.816198 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-run-httpd\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.822074 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.823094 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.827195 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.827654 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-scripts\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.834695 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-config-data\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.836671 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxpf5\" (UniqueName: \"kubernetes.io/projected/95c6f336-0111-4e92-baae-1c71a70320f0-kube-api-access-xxpf5\") pod \"ceilometer-0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:25 crc kubenswrapper[4860]: I0121 21:37:25.911629 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.239178 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.173:9322/\": read tcp 10.217.0.2:50476->10.217.0.173:9322: read: connection reset by peer"
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.240096 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.173:9322/\": read tcp 10.217.0.2:50488->10.217.0.173:9322: read: connection reset by peer"
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.508843 4860 generic.go:334] "Generic (PLEG): container finished" podID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerID="9083d070cb87094164f83b3db47e259e777a29fbe641ac8eebf0e5fc5356eda8" exitCode=0
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.508989 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7b293d1-6fc7-48de-b863-fa7bd5cce92b","Type":"ContainerDied","Data":"9083d070cb87094164f83b3db47e259e777a29fbe641ac8eebf0e5fc5356eda8"}
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.511039 4860 generic.go:334] "Generic (PLEG): container finished" podID="101f9698-917f-42d4-8c2e-74909b6566cc" containerID="c2052f7dbcf0ab1adecbbc288beb9075af7a81e075f332b7159a2c55cb03a091" exitCode=0
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.511096 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" event={"ID":"101f9698-917f-42d4-8c2e-74909b6566cc","Type":"ContainerDied","Data":"c2052f7dbcf0ab1adecbbc288beb9075af7a81e075f332b7159a2c55cb03a091"}
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.600218 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c5591e3-e2bd-40a9-b207-6fd48c26a725" path="/var/lib/kubelet/pods/9c5591e3-e2bd-40a9-b207-6fd48c26a725/volumes"
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.719118 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.868753 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.950810 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-custom-prometheus-ca\") pod \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") "
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.951049 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7qvm\" (UniqueName: \"kubernetes.io/projected/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-kube-api-access-f7qvm\") pod \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") "
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.951129 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-combined-ca-bundle\") pod \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") "
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.951189 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-logs\") pod \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") "
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.951244 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-config-data\") pod \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") "
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.951266 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-cert-memcached-mtls\") pod \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\" (UID: \"c7b293d1-6fc7-48de-b863-fa7bd5cce92b\") "
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.952287 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-logs" (OuterVolumeSpecName: "logs") pod "c7b293d1-6fc7-48de-b863-fa7bd5cce92b" (UID: "c7b293d1-6fc7-48de-b863-fa7bd5cce92b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.985905 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-kube-api-access-f7qvm" (OuterVolumeSpecName: "kube-api-access-f7qvm") pod "c7b293d1-6fc7-48de-b863-fa7bd5cce92b" (UID: "c7b293d1-6fc7-48de-b863-fa7bd5cce92b"). InnerVolumeSpecName "kube-api-access-f7qvm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:37:26 crc kubenswrapper[4860]: I0121 21:37:26.988982 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7b293d1-6fc7-48de-b863-fa7bd5cce92b" (UID: "c7b293d1-6fc7-48de-b863-fa7bd5cce92b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.007895 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-config-data" (OuterVolumeSpecName: "config-data") pod "c7b293d1-6fc7-48de-b863-fa7bd5cce92b" (UID: "c7b293d1-6fc7-48de-b863-fa7bd5cce92b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.028277 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c7b293d1-6fc7-48de-b863-fa7bd5cce92b" (UID: "c7b293d1-6fc7-48de-b863-fa7bd5cce92b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.047173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c7b293d1-6fc7-48de-b863-fa7bd5cce92b" (UID: "c7b293d1-6fc7-48de-b863-fa7bd5cce92b"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.055111 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.055165 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7qvm\" (UniqueName: \"kubernetes.io/projected/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-kube-api-access-f7qvm\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.055178 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.055191 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-logs\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.055203 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.055212 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c7b293d1-6fc7-48de-b863-fa7bd5cce92b-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.522622 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7b293d1-6fc7-48de-b863-fa7bd5cce92b","Type":"ContainerDied","Data":"a27830b5fe37c8149937f9229d74bc6595d3b670eaeca5c5996228dcce9dc1a6"}
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.523105 4860 scope.go:117] "RemoveContainer" containerID="9083d070cb87094164f83b3db47e259e777a29fbe641ac8eebf0e5fc5356eda8"
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.523249 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.536865 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95c6f336-0111-4e92-baae-1c71a70320f0","Type":"ContainerStarted","Data":"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"}
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.536915 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95c6f336-0111-4e92-baae-1c71a70320f0","Type":"ContainerStarted","Data":"88d0509df38a48432193c5c7f5dfde57e8a28d79d84852cf6284c46db057214e"}
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.582125 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.593188 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:37:27 crc kubenswrapper[4860]: I0121 21:37:27.596967 4860 scope.go:117] "RemoveContainer" containerID="d11176058093ed28bf7b68447f9ad9fb1dd5e9bb362f0666d4eb3712753fa882"
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.066808 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.152971 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr"
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.161700 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.178877 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/101f9698-917f-42d4-8c2e-74909b6566cc-operator-scripts\") pod \"101f9698-917f-42d4-8c2e-74909b6566cc\" (UID: \"101f9698-917f-42d4-8c2e-74909b6566cc\") "
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.179013 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-config-data\") pod \"2a322f50-9a50-4939-88dc-28ef9f949539\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") "
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.179109 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9n5x\" (UniqueName: \"kubernetes.io/projected/101f9698-917f-42d4-8c2e-74909b6566cc-kube-api-access-v9n5x\") pod \"101f9698-917f-42d4-8c2e-74909b6566cc\" (UID: \"101f9698-917f-42d4-8c2e-74909b6566cc\") "
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.179178 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a322f50-9a50-4939-88dc-28ef9f949539-logs\") pod \"2a322f50-9a50-4939-88dc-28ef9f949539\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") "
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.179262 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtxvl\" (UniqueName: \"kubernetes.io/projected/2a322f50-9a50-4939-88dc-28ef9f949539-kube-api-access-jtxvl\") pod \"2a322f50-9a50-4939-88dc-28ef9f949539\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") "
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.179297 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-cert-memcached-mtls\") pod \"2a322f50-9a50-4939-88dc-28ef9f949539\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") "
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.179342 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-combined-ca-bundle\") pod \"2a322f50-9a50-4939-88dc-28ef9f949539\" (UID: \"2a322f50-9a50-4939-88dc-28ef9f949539\") "
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.180301 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101f9698-917f-42d4-8c2e-74909b6566cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "101f9698-917f-42d4-8c2e-74909b6566cc" (UID: "101f9698-917f-42d4-8c2e-74909b6566cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.180422 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a322f50-9a50-4939-88dc-28ef9f949539-logs" (OuterVolumeSpecName: "logs") pod "2a322f50-9a50-4939-88dc-28ef9f949539" (UID: "2a322f50-9a50-4939-88dc-28ef9f949539"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.187053 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/101f9698-917f-42d4-8c2e-74909b6566cc-kube-api-access-v9n5x" (OuterVolumeSpecName: "kube-api-access-v9n5x") pod "101f9698-917f-42d4-8c2e-74909b6566cc" (UID: "101f9698-917f-42d4-8c2e-74909b6566cc"). InnerVolumeSpecName "kube-api-access-v9n5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.192516 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a322f50-9a50-4939-88dc-28ef9f949539-kube-api-access-jtxvl" (OuterVolumeSpecName: "kube-api-access-jtxvl") pod "2a322f50-9a50-4939-88dc-28ef9f949539" (UID: "2a322f50-9a50-4939-88dc-28ef9f949539"). InnerVolumeSpecName "kube-api-access-jtxvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.256333 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a322f50-9a50-4939-88dc-28ef9f949539" (UID: "2a322f50-9a50-4939-88dc-28ef9f949539"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.284624 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/101f9698-917f-42d4-8c2e-74909b6566cc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.284719 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9n5x\" (UniqueName: \"kubernetes.io/projected/101f9698-917f-42d4-8c2e-74909b6566cc-kube-api-access-v9n5x\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.284738 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a322f50-9a50-4939-88dc-28ef9f949539-logs\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.284758 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtxvl\" (UniqueName: \"kubernetes.io/projected/2a322f50-9a50-4939-88dc-28ef9f949539-kube-api-access-jtxvl\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.284771 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.334745 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "2a322f50-9a50-4939-88dc-28ef9f949539" (UID: "2a322f50-9a50-4939-88dc-28ef9f949539"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.360203 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-config-data" (OuterVolumeSpecName: "config-data") pod "2a322f50-9a50-4939-88dc-28ef9f949539" (UID: "2a322f50-9a50-4939-88dc-28ef9f949539"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.387308 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.387363 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a322f50-9a50-4939-88dc-28ef9f949539-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.551536 4860 generic.go:334] "Generic (PLEG): container finished" podID="2a322f50-9a50-4939-88dc-28ef9f949539" containerID="0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07" exitCode=0
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.551769 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.551864 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2a322f50-9a50-4939-88dc-28ef9f949539","Type":"ContainerDied","Data":"0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07"}
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.553124 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2a322f50-9a50-4939-88dc-28ef9f949539","Type":"ContainerDied","Data":"a8e3ec94b36688f35bc0bf14ddd03b40fbd03ae25e9d18d48840c7f414a5fc2a"}
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.553197 4860 scope.go:117] "RemoveContainer" containerID="0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07"
Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.556801 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95c6f336-0111-4e92-baae-1c71a70320f0","Type":"ContainerStarted","Data":"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"} Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.574383 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" event={"ID":"101f9698-917f-42d4-8c2e-74909b6566cc","Type":"ContainerDied","Data":"a9c239cc6c819f7be792cdf7c14cdbd299c6c3b66368f5a4f1af089ff829aa41"} Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.574795 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9c239cc6c819f7be792cdf7c14cdbd299c6c3b66368f5a4f1af089ff829aa41" Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.574370 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher15cb-account-delete-ktbwr" Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.591097 4860 scope.go:117] "RemoveContainer" containerID="0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07" Jan 21 21:37:28 crc kubenswrapper[4860]: E0121 21:37:28.594217 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07\": container with ID starting with 0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07 not found: ID does not exist" containerID="0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07" Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.594437 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07"} err="failed to get container status \"0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07\": rpc error: code = NotFound desc = could not find container 
\"0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07\": container with ID starting with 0108a324561db413a1652de4e01b8b179c0788f733320f35b2951703e3ecac07 not found: ID does not exist" Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.599495 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:37:28 crc kubenswrapper[4860]: E0121 21:37:28.599731 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.661414 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" path="/var/lib/kubelet/pods/c7b293d1-6fc7-48de-b863-fa7bd5cce92b/volumes" Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.713166 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:37:28 crc kubenswrapper[4860]: I0121 21:37:28.727051 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.628822 4860 generic.go:334] "Generic (PLEG): container finished" podID="7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" containerID="cb885b864bfddf712126e4328bae845e3a70d6d5831c4b73842ecfd75edb5fa8" exitCode=0 Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.629295 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054","Type":"ContainerDied","Data":"cb885b864bfddf712126e4328bae845e3a70d6d5831c4b73842ecfd75edb5fa8"} Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.643030 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95c6f336-0111-4e92-baae-1c71a70320f0","Type":"ContainerStarted","Data":"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"} Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.760269 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.827924 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-cert-memcached-mtls\") pod \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.828031 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-config-data\") pod \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.828102 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-custom-prometheus-ca\") pod \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.828143 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-logs\") pod 
\"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.828298 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w8md\" (UniqueName: \"kubernetes.io/projected/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-kube-api-access-8w8md\") pod \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.828398 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-combined-ca-bundle\") pod \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\" (UID: \"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054\") " Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.830797 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-logs" (OuterVolumeSpecName: "logs") pod "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" (UID: "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.865389 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-kube-api-access-8w8md" (OuterVolumeSpecName: "kube-api-access-8w8md") pod "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" (UID: "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054"). InnerVolumeSpecName "kube-api-access-8w8md". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.879673 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" (UID: "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.907556 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" (UID: "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.916239 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-config-data" (OuterVolumeSpecName: "config-data") pod "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" (UID: "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.930313 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.930675 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.930783 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.930866 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w8md\" (UniqueName: \"kubernetes.io/projected/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-kube-api-access-8w8md\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.930978 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:29 crc kubenswrapper[4860]: I0121 21:37:29.974442 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" (UID: "7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:30 crc kubenswrapper[4860]: I0121 21:37:30.033312 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:30 crc kubenswrapper[4860]: I0121 21:37:30.593798 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a322f50-9a50-4939-88dc-28ef9f949539" path="/var/lib/kubelet/pods/2a322f50-9a50-4939-88dc-28ef9f949539/volumes" Jan 21 21:37:30 crc kubenswrapper[4860]: I0121 21:37:30.663816 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95c6f336-0111-4e92-baae-1c71a70320f0","Type":"ContainerStarted","Data":"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e"} Jan 21 21:37:30 crc kubenswrapper[4860]: I0121 21:37:30.668473 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054","Type":"ContainerDied","Data":"a5a12768be03f6b5a084c2ae4b236a42ac54b7064af84f5ef00dcef632d52fb4"} Jan 21 21:37:30 crc kubenswrapper[4860]: I0121 21:37:30.668994 4860 scope.go:117] "RemoveContainer" containerID="cb885b864bfddf712126e4328bae845e3a70d6d5831c4b73842ecfd75edb5fa8" Jan 21 21:37:30 crc kubenswrapper[4860]: I0121 21:37:30.668569 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:30 crc kubenswrapper[4860]: I0121 21:37:30.717477 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:37:30 crc kubenswrapper[4860]: I0121 21:37:30.723655 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:37:31 crc kubenswrapper[4860]: I0121 21:37:31.681522 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="ceilometer-central-agent" containerID="cri-o://30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314" gracePeriod=30 Jan 21 21:37:31 crc kubenswrapper[4860]: I0121 21:37:31.681552 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="proxy-httpd" containerID="cri-o://4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e" gracePeriod=30 Jan 21 21:37:31 crc kubenswrapper[4860]: I0121 21:37:31.681654 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="sg-core" containerID="cri-o://daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14" gracePeriod=30 Jan 21 21:37:31 crc kubenswrapper[4860]: I0121 21:37:31.681693 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="ceilometer-notification-agent" containerID="cri-o://d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478" gracePeriod=30 Jan 21 21:37:31 crc kubenswrapper[4860]: I0121 21:37:31.681996 4860 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:37:31 crc kubenswrapper[4860]: I0121 21:37:31.722473 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=3.034844756 podStartE2EDuration="6.722441512s" podCreationTimestamp="2026-01-21 21:37:25 +0000 UTC" firstStartedPulling="2026-01-21 21:37:26.712526853 +0000 UTC m=+1738.934705323" lastFinishedPulling="2026-01-21 21:37:30.400123589 +0000 UTC m=+1742.622302079" observedRunningTime="2026-01-21 21:37:31.716357324 +0000 UTC m=+1743.938535814" watchObservedRunningTime="2026-01-21 21:37:31.722441512 +0000 UTC m=+1743.944619982" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.491332 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.585007 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxpf5\" (UniqueName: \"kubernetes.io/projected/95c6f336-0111-4e92-baae-1c71a70320f0-kube-api-access-xxpf5\") pod \"95c6f336-0111-4e92-baae-1c71a70320f0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.585188 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-config-data\") pod \"95c6f336-0111-4e92-baae-1c71a70320f0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.585228 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-scripts\") pod \"95c6f336-0111-4e92-baae-1c71a70320f0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.585324 4860 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-ceilometer-tls-certs\") pod \"95c6f336-0111-4e92-baae-1c71a70320f0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.585397 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-run-httpd\") pod \"95c6f336-0111-4e92-baae-1c71a70320f0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.585416 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-combined-ca-bundle\") pod \"95c6f336-0111-4e92-baae-1c71a70320f0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.585480 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-log-httpd\") pod \"95c6f336-0111-4e92-baae-1c71a70320f0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.585506 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-sg-core-conf-yaml\") pod \"95c6f336-0111-4e92-baae-1c71a70320f0\" (UID: \"95c6f336-0111-4e92-baae-1c71a70320f0\") " Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.589066 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "95c6f336-0111-4e92-baae-1c71a70320f0" (UID: 
"95c6f336-0111-4e92-baae-1c71a70320f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.589483 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "95c6f336-0111-4e92-baae-1c71a70320f0" (UID: "95c6f336-0111-4e92-baae-1c71a70320f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.593675 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c6f336-0111-4e92-baae-1c71a70320f0-kube-api-access-xxpf5" (OuterVolumeSpecName: "kube-api-access-xxpf5") pod "95c6f336-0111-4e92-baae-1c71a70320f0" (UID: "95c6f336-0111-4e92-baae-1c71a70320f0"). InnerVolumeSpecName "kube-api-access-xxpf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.594401 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-scripts" (OuterVolumeSpecName: "scripts") pod "95c6f336-0111-4e92-baae-1c71a70320f0" (UID: "95c6f336-0111-4e92-baae-1c71a70320f0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.595143 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" path="/var/lib/kubelet/pods/7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054/volumes" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.621925 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "95c6f336-0111-4e92-baae-1c71a70320f0" (UID: "95c6f336-0111-4e92-baae-1c71a70320f0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.657157 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "95c6f336-0111-4e92-baae-1c71a70320f0" (UID: "95c6f336-0111-4e92-baae-1c71a70320f0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.686768 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.686810 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.686823 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.686834 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95c6f336-0111-4e92-baae-1c71a70320f0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.686846 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.686856 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxpf5\" (UniqueName: \"kubernetes.io/projected/95c6f336-0111-4e92-baae-1c71a70320f0-kube-api-access-xxpf5\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.693018 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95c6f336-0111-4e92-baae-1c71a70320f0" (UID: 
"95c6f336-0111-4e92-baae-1c71a70320f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.701348 4860 generic.go:334] "Generic (PLEG): container finished" podID="95c6f336-0111-4e92-baae-1c71a70320f0" containerID="4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e" exitCode=0 Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.701399 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95c6f336-0111-4e92-baae-1c71a70320f0","Type":"ContainerDied","Data":"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e"} Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.701475 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95c6f336-0111-4e92-baae-1c71a70320f0","Type":"ContainerDied","Data":"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"} Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.701378 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.701423 4860 generic.go:334] "Generic (PLEG): container finished" podID="95c6f336-0111-4e92-baae-1c71a70320f0" containerID="daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14" exitCode=2 Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.701527 4860 scope.go:117] "RemoveContainer" containerID="4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e" Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.701522 4860 generic.go:334] "Generic (PLEG): container finished" podID="95c6f336-0111-4e92-baae-1c71a70320f0" containerID="d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478" exitCode=0 Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.701549 4860 generic.go:334] "Generic (PLEG): container finished" podID="95c6f336-0111-4e92-baae-1c71a70320f0" containerID="30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314" exitCode=0 Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.701578 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95c6f336-0111-4e92-baae-1c71a70320f0","Type":"ContainerDied","Data":"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"} Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.701611 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95c6f336-0111-4e92-baae-1c71a70320f0","Type":"ContainerDied","Data":"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"} Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.701622 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95c6f336-0111-4e92-baae-1c71a70320f0","Type":"ContainerDied","Data":"88d0509df38a48432193c5c7f5dfde57e8a28d79d84852cf6284c46db057214e"} Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.739448 
4860 scope.go:117] "RemoveContainer" containerID="daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.739717 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-config-data" (OuterVolumeSpecName: "config-data") pod "95c6f336-0111-4e92-baae-1c71a70320f0" (UID: "95c6f336-0111-4e92-baae-1c71a70320f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.762997 4860 scope.go:117] "RemoveContainer" containerID="d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.788833 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.788875 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c6f336-0111-4e92-baae-1c71a70320f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.793665 4860 scope.go:117] "RemoveContainer" containerID="30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.863214 4860 scope.go:117] "RemoveContainer" containerID="4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e"
Jan 21 21:37:32 crc kubenswrapper[4860]: E0121 21:37:32.863830 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e\": container with ID starting with 4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e not found: ID does not exist" containerID="4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.863871 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e"} err="failed to get container status \"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e\": rpc error: code = NotFound desc = could not find container \"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e\": container with ID starting with 4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.863898 4860 scope.go:117] "RemoveContainer" containerID="daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"
Jan 21 21:37:32 crc kubenswrapper[4860]: E0121 21:37:32.864330 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14\": container with ID starting with daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14 not found: ID does not exist" containerID="daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.864407 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"} err="failed to get container status \"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14\": rpc error: code = NotFound desc = could not find container \"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14\": container with ID starting with daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14 not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.864455 4860 scope.go:117] "RemoveContainer" containerID="d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"
Jan 21 21:37:32 crc kubenswrapper[4860]: E0121 21:37:32.865204 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478\": container with ID starting with d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478 not found: ID does not exist" containerID="d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.865237 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"} err="failed to get container status \"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478\": rpc error: code = NotFound desc = could not find container \"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478\": container with ID starting with d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478 not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.865258 4860 scope.go:117] "RemoveContainer" containerID="30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"
Jan 21 21:37:32 crc kubenswrapper[4860]: E0121 21:37:32.866013 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314\": container with ID starting with 30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314 not found: ID does not exist" containerID="30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.866111 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"} err="failed to get container status \"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314\": rpc error: code = NotFound desc = could not find container \"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314\": container with ID starting with 30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314 not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.866156 4860 scope.go:117] "RemoveContainer" containerID="4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.866548 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e"} err="failed to get container status \"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e\": rpc error: code = NotFound desc = could not find container \"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e\": container with ID starting with 4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.866580 4860 scope.go:117] "RemoveContainer" containerID="daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.866895 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"} err="failed to get container status \"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14\": rpc error: code = NotFound desc = could not find container \"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14\": container with ID starting with daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14 not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.866941 4860 scope.go:117] "RemoveContainer" containerID="d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.867264 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"} err="failed to get container status \"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478\": rpc error: code = NotFound desc = could not find container \"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478\": container with ID starting with d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478 not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.867299 4860 scope.go:117] "RemoveContainer" containerID="30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.867601 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"} err="failed to get container status \"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314\": rpc error: code = NotFound desc = could not find container \"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314\": container with ID starting with 30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314 not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.867628 4860 scope.go:117] "RemoveContainer" containerID="4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.867907 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e"} err="failed to get container status \"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e\": rpc error: code = NotFound desc = could not find container \"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e\": container with ID starting with 4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.867946 4860 scope.go:117] "RemoveContainer" containerID="daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.868250 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"} err="failed to get container status \"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14\": rpc error: code = NotFound desc = could not find container \"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14\": container with ID starting with daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14 not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.868306 4860 scope.go:117] "RemoveContainer" containerID="d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.868717 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"} err="failed to get container status \"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478\": rpc error: code = NotFound desc = could not find container \"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478\": container with ID starting with d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478 not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.868754 4860 scope.go:117] "RemoveContainer" containerID="30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.869131 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"} err="failed to get container status \"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314\": rpc error: code = NotFound desc = could not find container \"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314\": container with ID starting with 30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314 not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.869162 4860 scope.go:117] "RemoveContainer" containerID="4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.869578 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e"} err="failed to get container status \"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e\": rpc error: code = NotFound desc = could not find container \"4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e\": container with ID starting with 4e1d0408fb83c6224ee51c5deaf652c549962cd6c452b3e332015a636f4b2c0e not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.869607 4860 scope.go:117] "RemoveContainer" containerID="daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.870006 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14"} err="failed to get container status \"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14\": rpc error: code = NotFound desc = could not find container \"daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14\": container with ID starting with daf4cd442cbc63c61fd2575b9e55f24bc3f62086b8195477d9462c3cb6851b14 not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.870038 4860 scope.go:117] "RemoveContainer" containerID="d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.870333 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478"} err="failed to get container status \"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478\": rpc error: code = NotFound desc = could not find container \"d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478\": container with ID starting with d0b1d9edde3e036abfb65af3b3ae70199544c19641fb88b658ea94236202e478 not found: ID does not exist"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.870360 4860 scope.go:117] "RemoveContainer" containerID="30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"
Jan 21 21:37:32 crc kubenswrapper[4860]: I0121 21:37:32.870614 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314"} err="failed to get container status \"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314\": rpc error: code = NotFound desc = could not find container \"30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314\": container with ID starting with 30d294571d010e7db02352270cdc0cbd496d4c531a4b6045097ce99b7d250314 not found: ID does not exist"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.038975 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.046293 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.074300 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:37:33 crc kubenswrapper[4860]: E0121 21:37:33.074825 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="ceilometer-central-agent"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.074858 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="ceilometer-central-agent"
Jan 21 21:37:33 crc kubenswrapper[4860]: E0121 21:37:33.074877 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101f9698-917f-42d4-8c2e-74909b6566cc" containerName="mariadb-account-delete"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.074887 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="101f9698-917f-42d4-8c2e-74909b6566cc" containerName="mariadb-account-delete"
Jan 21 21:37:33 crc kubenswrapper[4860]: E0121 21:37:33.074897 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="proxy-httpd"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.074908 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="proxy-httpd"
Jan 21 21:37:33 crc kubenswrapper[4860]: E0121 21:37:33.074925 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a322f50-9a50-4939-88dc-28ef9f949539" containerName="watcher-applier"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.074951 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a322f50-9a50-4939-88dc-28ef9f949539" containerName="watcher-applier"
Jan 21 21:37:33 crc kubenswrapper[4860]: E0121 21:37:33.075035 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" containerName="watcher-decision-engine"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075102 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" containerName="watcher-decision-engine"
Jan 21 21:37:33 crc kubenswrapper[4860]: E0121 21:37:33.075121 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerName="watcher-api"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075129 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerName="watcher-api"
Jan 21 21:37:33 crc kubenswrapper[4860]: E0121 21:37:33.075139 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerName="watcher-kuttl-api-log"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075170 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerName="watcher-kuttl-api-log"
Jan 21 21:37:33 crc kubenswrapper[4860]: E0121 21:37:33.075191 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="sg-core"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075198 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="sg-core"
Jan 21 21:37:33 crc kubenswrapper[4860]: E0121 21:37:33.075214 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="ceilometer-notification-agent"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075222 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="ceilometer-notification-agent"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075427 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerName="watcher-api"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075452 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="proxy-httpd"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075467 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a322f50-9a50-4939-88dc-28ef9f949539" containerName="watcher-applier"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075479 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="101f9698-917f-42d4-8c2e-74909b6566cc" containerName="mariadb-account-delete"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075500 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="ceilometer-central-agent"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075508 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="sg-core"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075519 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" containerName="ceilometer-notification-agent"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075527 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a7e6b59-e8c7-413d-9e8d-56f0fd0e3054" containerName="watcher-decision-engine"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.075539 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b293d1-6fc7-48de-b863-fa7bd5cce92b" containerName="watcher-kuttl-api-log"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.079838 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.081751 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.083954 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.084829 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.094354 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-log-httpd\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.094424 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.094466 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.094535 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.094606 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rp5x\" (UniqueName: \"kubernetes.io/projected/a953ea6f-ac47-4e84-9d3a-d48a50069a97-kube-api-access-6rp5x\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.094653 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-config-data\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.094698 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-scripts\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.094743 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-run-httpd\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.103987 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.197471 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-log-httpd\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.197884 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.197965 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.198038 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.198105 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rp5x\" (UniqueName: \"kubernetes.io/projected/a953ea6f-ac47-4e84-9d3a-d48a50069a97-kube-api-access-6rp5x\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.198172 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-config-data\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.198217 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-scripts\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.198256 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-run-httpd\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.198323 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-log-httpd\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.198825 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-run-httpd\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.204598 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.205880 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-scripts\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.214845 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.225072 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.225185 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-config-data\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.228796 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rp5x\" (UniqueName: \"kubernetes.io/projected/a953ea6f-ac47-4e84-9d3a-d48a50069a97-kube-api-access-6rp5x\") pod \"ceilometer-0\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.411714 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:37:33 crc kubenswrapper[4860]: I0121 21:37:33.954572 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:37:33 crc kubenswrapper[4860]: W0121 21:37:33.963566 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda953ea6f_ac47_4e84_9d3a_d48a50069a97.slice/crio-39ffb24f0918083b71a66aef1736935f271beda51258a0209f4bbd4a8faaaf03 WatchSource:0}: Error finding container 39ffb24f0918083b71a66aef1736935f271beda51258a0209f4bbd4a8faaaf03: Status 404 returned error can't find the container with id 39ffb24f0918083b71a66aef1736935f271beda51258a0209f4bbd4a8faaaf03
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.268149 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6zs2l"]
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.278889 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6zs2l"]
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.291606 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher15cb-account-delete-ktbwr"]
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.298439 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-15cb-account-create-update-6blqv"]
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.304971 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-15cb-account-create-update-6blqv"]
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.311854 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher15cb-account-delete-ktbwr"]
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.597463 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="101f9698-917f-42d4-8c2e-74909b6566cc" path="/var/lib/kubelet/pods/101f9698-917f-42d4-8c2e-74909b6566cc/volumes"
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.598986 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c6f336-0111-4e92-baae-1c71a70320f0" path="/var/lib/kubelet/pods/95c6f336-0111-4e92-baae-1c71a70320f0/volumes"
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.599907 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d178a458-cfdf-4958-b93d-d5618868c282" path="/var/lib/kubelet/pods/d178a458-cfdf-4958-b93d-d5618868c282/volumes"
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.601205 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb7cf6d-b0da-4a06-b15f-0aebc81e5861" path="/var/lib/kubelet/pods/feb7cf6d-b0da-4a06-b15f-0aebc81e5861/volumes"
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.770657 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a953ea6f-ac47-4e84-9d3a-d48a50069a97","Type":"ContainerStarted","Data":"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5"}
Jan 21 21:37:34 crc kubenswrapper[4860]: I0121 21:37:34.770746 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a953ea6f-ac47-4e84-9d3a-d48a50069a97","Type":"ContainerStarted","Data":"39ffb24f0918083b71a66aef1736935f271beda51258a0209f4bbd4a8faaaf03"}
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.782556 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-97wqz"]
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.785135 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a953ea6f-ac47-4e84-9d3a-d48a50069a97","Type":"ContainerStarted","Data":"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e"}
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.785268 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-97wqz"
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.793764 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-97wqz"]
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.817914 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"]
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.819717 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.829338 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"]
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.834181 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret"
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.964366 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69hdh\" (UniqueName: \"kubernetes.io/projected/f7292d6f-3ae7-456f-959a-58631b49ec0d-kube-api-access-69hdh\") pod \"watcher-ed96-account-create-update-4wj9p\" (UID: \"f7292d6f-3ae7-456f-959a-58631b49ec0d\") " pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.964480 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2829057d-3b80-46eb-af3b-b2132c283963-operator-scripts\") pod \"watcher-db-create-97wqz\" (UID: \"2829057d-3b80-46eb-af3b-b2132c283963\") " pod="watcher-kuttl-default/watcher-db-create-97wqz"
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.964572 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7292d6f-3ae7-456f-959a-58631b49ec0d-operator-scripts\") pod \"watcher-ed96-account-create-update-4wj9p\" (UID: \"f7292d6f-3ae7-456f-959a-58631b49ec0d\") " pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"
Jan 21 21:37:35 crc kubenswrapper[4860]: I0121 21:37:35.964651 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmm9q\" (UniqueName: \"kubernetes.io/projected/2829057d-3b80-46eb-af3b-b2132c283963-kube-api-access-vmm9q\") pod \"watcher-db-create-97wqz\" (UID: \"2829057d-3b80-46eb-af3b-b2132c283963\") " pod="watcher-kuttl-default/watcher-db-create-97wqz"
Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.066282 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69hdh\" (UniqueName: \"kubernetes.io/projected/f7292d6f-3ae7-456f-959a-58631b49ec0d-kube-api-access-69hdh\") pod \"watcher-ed96-account-create-update-4wj9p\" (UID: \"f7292d6f-3ae7-456f-959a-58631b49ec0d\") " pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"
Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.066383 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2829057d-3b80-46eb-af3b-b2132c283963-operator-scripts\") pod \"watcher-db-create-97wqz\" (UID: \"2829057d-3b80-46eb-af3b-b2132c283963\") " pod="watcher-kuttl-default/watcher-db-create-97wqz"
Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.066461 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7292d6f-3ae7-456f-959a-58631b49ec0d-operator-scripts\") pod \"watcher-ed96-account-create-update-4wj9p\" (UID: \"f7292d6f-3ae7-456f-959a-58631b49ec0d\") " pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"
Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.066526 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmm9q\" (UniqueName: \"kubernetes.io/projected/2829057d-3b80-46eb-af3b-b2132c283963-kube-api-access-vmm9q\") pod \"watcher-db-create-97wqz\" (UID: \"2829057d-3b80-46eb-af3b-b2132c283963\") " pod="watcher-kuttl-default/watcher-db-create-97wqz"
Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.067778 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2829057d-3b80-46eb-af3b-b2132c283963-operator-scripts\") pod \"watcher-db-create-97wqz\" (UID: \"2829057d-3b80-46eb-af3b-b2132c283963\") " pod="watcher-kuttl-default/watcher-db-create-97wqz"
Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.067892 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7292d6f-3ae7-456f-959a-58631b49ec0d-operator-scripts\") pod \"watcher-ed96-account-create-update-4wj9p\" (UID: \"f7292d6f-3ae7-456f-959a-58631b49ec0d\") " pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"
Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.077184 4860 scope.go:117] "RemoveContainer" containerID="54294abff40347cc68e45f4a266bded0002980952cca7233863473b214adbc57"
Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.090004 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69hdh\" (UniqueName: \"kubernetes.io/projected/f7292d6f-3ae7-456f-959a-58631b49ec0d-kube-api-access-69hdh\") pod \"watcher-ed96-account-create-update-4wj9p\" (UID: \"f7292d6f-3ae7-456f-959a-58631b49ec0d\") " pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"
Jan 21 21:37:36 crc kubenswrapper[4860]: I0121
21:37:36.111631 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmm9q\" (UniqueName: \"kubernetes.io/projected/2829057d-3b80-46eb-af3b-b2132c283963-kube-api-access-vmm9q\") pod \"watcher-db-create-97wqz\" (UID: \"2829057d-3b80-46eb-af3b-b2132c283963\") " pod="watcher-kuttl-default/watcher-db-create-97wqz" Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.135693 4860 scope.go:117] "RemoveContainer" containerID="91bd7c218c4efb95cad7bf25d6f32ec21b4dae0bbfe76973cd1c24818130132b" Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.159181 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p" Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.404785 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-97wqz" Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.827577 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a953ea6f-ac47-4e84-9d3a-d48a50069a97","Type":"ContainerStarted","Data":"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"} Jan 21 21:37:36 crc kubenswrapper[4860]: I0121 21:37:36.908386 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"] Jan 21 21:37:37 crc kubenswrapper[4860]: W0121 21:37:37.319295 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2829057d_3b80_46eb_af3b_b2132c283963.slice/crio-335149ee13f86c246f38bc7e3c833f5f6f9a8c5dcee0ddd700999a738e0dadf7 WatchSource:0}: Error finding container 335149ee13f86c246f38bc7e3c833f5f6f9a8c5dcee0ddd700999a738e0dadf7: Status 404 returned error can't find the container with id 335149ee13f86c246f38bc7e3c833f5f6f9a8c5dcee0ddd700999a738e0dadf7 Jan 21 21:37:37 crc 
kubenswrapper[4860]: I0121 21:37:37.320241 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-97wqz"] Jan 21 21:37:37 crc kubenswrapper[4860]: I0121 21:37:37.837515 4860 generic.go:334] "Generic (PLEG): container finished" podID="2829057d-3b80-46eb-af3b-b2132c283963" containerID="0ae997911b4b037d32b7b2e4f42d51c116bfaef8382c0f6446afd3181084e9f4" exitCode=0 Jan 21 21:37:37 crc kubenswrapper[4860]: I0121 21:37:37.837702 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-97wqz" event={"ID":"2829057d-3b80-46eb-af3b-b2132c283963","Type":"ContainerDied","Data":"0ae997911b4b037d32b7b2e4f42d51c116bfaef8382c0f6446afd3181084e9f4"} Jan 21 21:37:37 crc kubenswrapper[4860]: I0121 21:37:37.838111 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-97wqz" event={"ID":"2829057d-3b80-46eb-af3b-b2132c283963","Type":"ContainerStarted","Data":"335149ee13f86c246f38bc7e3c833f5f6f9a8c5dcee0ddd700999a738e0dadf7"} Jan 21 21:37:37 crc kubenswrapper[4860]: I0121 21:37:37.841313 4860 generic.go:334] "Generic (PLEG): container finished" podID="f7292d6f-3ae7-456f-959a-58631b49ec0d" containerID="140a928455b671e1ad23d527064e7121e2bbe20c4b276eb550b740dfe6625f90" exitCode=0 Jan 21 21:37:37 crc kubenswrapper[4860]: I0121 21:37:37.841409 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p" event={"ID":"f7292d6f-3ae7-456f-959a-58631b49ec0d","Type":"ContainerDied","Data":"140a928455b671e1ad23d527064e7121e2bbe20c4b276eb550b740dfe6625f90"} Jan 21 21:37:37 crc kubenswrapper[4860]: I0121 21:37:37.841465 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p" event={"ID":"f7292d6f-3ae7-456f-959a-58631b49ec0d","Type":"ContainerStarted","Data":"97de89bb2d63f2485bbe6780cb820b7d252d28fa97dd3517c500ee87a302274a"} Jan 21 21:37:38 crc 
kubenswrapper[4860]: I0121 21:37:38.854923 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a953ea6f-ac47-4e84-9d3a-d48a50069a97","Type":"ContainerStarted","Data":"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"} Jan 21 21:37:38 crc kubenswrapper[4860]: I0121 21:37:38.908399 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.5128758169999998 podStartE2EDuration="5.908368704s" podCreationTimestamp="2026-01-21 21:37:33 +0000 UTC" firstStartedPulling="2026-01-21 21:37:33.967153421 +0000 UTC m=+1746.189331891" lastFinishedPulling="2026-01-21 21:37:38.362646318 +0000 UTC m=+1750.584824778" observedRunningTime="2026-01-21 21:37:38.895542779 +0000 UTC m=+1751.117721269" watchObservedRunningTime="2026-01-21 21:37:38.908368704 +0000 UTC m=+1751.130547174" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.311068 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.324233 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-97wqz" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.365526 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7292d6f-3ae7-456f-959a-58631b49ec0d-operator-scripts\") pod \"f7292d6f-3ae7-456f-959a-58631b49ec0d\" (UID: \"f7292d6f-3ae7-456f-959a-58631b49ec0d\") " Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.365696 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2829057d-3b80-46eb-af3b-b2132c283963-operator-scripts\") pod \"2829057d-3b80-46eb-af3b-b2132c283963\" (UID: \"2829057d-3b80-46eb-af3b-b2132c283963\") " Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.365791 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmm9q\" (UniqueName: \"kubernetes.io/projected/2829057d-3b80-46eb-af3b-b2132c283963-kube-api-access-vmm9q\") pod \"2829057d-3b80-46eb-af3b-b2132c283963\" (UID: \"2829057d-3b80-46eb-af3b-b2132c283963\") " Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.365871 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69hdh\" (UniqueName: \"kubernetes.io/projected/f7292d6f-3ae7-456f-959a-58631b49ec0d-kube-api-access-69hdh\") pod \"f7292d6f-3ae7-456f-959a-58631b49ec0d\" (UID: \"f7292d6f-3ae7-456f-959a-58631b49ec0d\") " Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.366325 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2829057d-3b80-46eb-af3b-b2132c283963-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2829057d-3b80-46eb-af3b-b2132c283963" (UID: "2829057d-3b80-46eb-af3b-b2132c283963"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.367149 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7292d6f-3ae7-456f-959a-58631b49ec0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7292d6f-3ae7-456f-959a-58631b49ec0d" (UID: "f7292d6f-3ae7-456f-959a-58631b49ec0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.374365 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2829057d-3b80-46eb-af3b-b2132c283963-kube-api-access-vmm9q" (OuterVolumeSpecName: "kube-api-access-vmm9q") pod "2829057d-3b80-46eb-af3b-b2132c283963" (UID: "2829057d-3b80-46eb-af3b-b2132c283963"). InnerVolumeSpecName "kube-api-access-vmm9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.374455 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7292d6f-3ae7-456f-959a-58631b49ec0d-kube-api-access-69hdh" (OuterVolumeSpecName: "kube-api-access-69hdh") pod "f7292d6f-3ae7-456f-959a-58631b49ec0d" (UID: "f7292d6f-3ae7-456f-959a-58631b49ec0d"). InnerVolumeSpecName "kube-api-access-69hdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.468142 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7292d6f-3ae7-456f-959a-58631b49ec0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.468185 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2829057d-3b80-46eb-af3b-b2132c283963-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.468197 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmm9q\" (UniqueName: \"kubernetes.io/projected/2829057d-3b80-46eb-af3b-b2132c283963-kube-api-access-vmm9q\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.468210 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69hdh\" (UniqueName: \"kubernetes.io/projected/f7292d6f-3ae7-456f-959a-58631b49ec0d-kube-api-access-69hdh\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.871087 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.872997 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p" event={"ID":"f7292d6f-3ae7-456f-959a-58631b49ec0d","Type":"ContainerDied","Data":"97de89bb2d63f2485bbe6780cb820b7d252d28fa97dd3517c500ee87a302274a"} Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.873093 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97de89bb2d63f2485bbe6780cb820b7d252d28fa97dd3517c500ee87a302274a" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.878653 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-97wqz" event={"ID":"2829057d-3b80-46eb-af3b-b2132c283963","Type":"ContainerDied","Data":"335149ee13f86c246f38bc7e3c833f5f6f9a8c5dcee0ddd700999a738e0dadf7"} Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.878723 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-97wqz" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.878770 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="335149ee13f86c246f38bc7e3c833f5f6f9a8c5dcee0ddd700999a738e0dadf7" Jan 21 21:37:39 crc kubenswrapper[4860]: I0121 21:37:39.879521 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.267914 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-982rj"] Jan 21 21:37:41 crc kubenswrapper[4860]: E0121 21:37:41.273504 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7292d6f-3ae7-456f-959a-58631b49ec0d" containerName="mariadb-account-create-update" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.274787 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7292d6f-3ae7-456f-959a-58631b49ec0d" containerName="mariadb-account-create-update" Jan 21 21:37:41 crc kubenswrapper[4860]: E0121 21:37:41.274815 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2829057d-3b80-46eb-af3b-b2132c283963" containerName="mariadb-database-create" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.274824 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2829057d-3b80-46eb-af3b-b2132c283963" containerName="mariadb-database-create" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.275573 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7292d6f-3ae7-456f-959a-58631b49ec0d" containerName="mariadb-account-create-update" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.275627 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2829057d-3b80-46eb-af3b-b2132c283963" containerName="mariadb-database-create" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.277561 4860 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.283132 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-57bm7" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.283356 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.283840 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-982rj"] Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.367226 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njdxm\" (UniqueName: \"kubernetes.io/projected/943e71b2-4f7f-4746-8e43-ae9f9ddab819-kube-api-access-njdxm\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.367316 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-config-data\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.367363 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-db-sync-config-data\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.367416 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.468759 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njdxm\" (UniqueName: \"kubernetes.io/projected/943e71b2-4f7f-4746-8e43-ae9f9ddab819-kube-api-access-njdxm\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.468829 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-config-data\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.468869 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-db-sync-config-data\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.468912 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc 
kubenswrapper[4860]: I0121 21:37:41.478826 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-config-data\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.483848 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-db-sync-config-data\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.491853 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.498598 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njdxm\" (UniqueName: \"kubernetes.io/projected/943e71b2-4f7f-4746-8e43-ae9f9ddab819-kube-api-access-njdxm\") pod \"watcher-kuttl-db-sync-982rj\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:41 crc kubenswrapper[4860]: I0121 21:37:41.618313 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:42 crc kubenswrapper[4860]: I0121 21:37:42.178630 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-982rj"] Jan 21 21:37:42 crc kubenswrapper[4860]: W0121 21:37:42.196998 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod943e71b2_4f7f_4746_8e43_ae9f9ddab819.slice/crio-3ec224e6dd0b0c48f85ebd98e4b23f4fcfb5129e4552232206f552688ef685fd WatchSource:0}: Error finding container 3ec224e6dd0b0c48f85ebd98e4b23f4fcfb5129e4552232206f552688ef685fd: Status 404 returned error can't find the container with id 3ec224e6dd0b0c48f85ebd98e4b23f4fcfb5129e4552232206f552688ef685fd Jan 21 21:37:42 crc kubenswrapper[4860]: I0121 21:37:42.916543 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" event={"ID":"943e71b2-4f7f-4746-8e43-ae9f9ddab819","Type":"ContainerStarted","Data":"829ce9e97c11a141da2881c1ea310217ba8a78327d05367061cad0944597a7e5"} Jan 21 21:37:42 crc kubenswrapper[4860]: I0121 21:37:42.916603 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" event={"ID":"943e71b2-4f7f-4746-8e43-ae9f9ddab819","Type":"ContainerStarted","Data":"3ec224e6dd0b0c48f85ebd98e4b23f4fcfb5129e4552232206f552688ef685fd"} Jan 21 21:37:42 crc kubenswrapper[4860]: I0121 21:37:42.937554 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" podStartSLOduration=1.937333593 podStartE2EDuration="1.937333593s" podCreationTimestamp="2026-01-21 21:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:37:42.934347931 +0000 UTC m=+1755.156526421" watchObservedRunningTime="2026-01-21 
21:37:42.937333593 +0000 UTC m=+1755.159512073" Jan 21 21:37:43 crc kubenswrapper[4860]: I0121 21:37:43.580473 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:37:43 crc kubenswrapper[4860]: E0121 21:37:43.580777 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:37:45 crc kubenswrapper[4860]: I0121 21:37:45.964682 4860 generic.go:334] "Generic (PLEG): container finished" podID="943e71b2-4f7f-4746-8e43-ae9f9ddab819" containerID="829ce9e97c11a141da2881c1ea310217ba8a78327d05367061cad0944597a7e5" exitCode=0 Jan 21 21:37:45 crc kubenswrapper[4860]: I0121 21:37:45.964802 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" event={"ID":"943e71b2-4f7f-4746-8e43-ae9f9ddab819","Type":"ContainerDied","Data":"829ce9e97c11a141da2881c1ea310217ba8a78327d05367061cad0944597a7e5"} Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.314610 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.450051 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-db-sync-config-data\") pod \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.450191 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njdxm\" (UniqueName: \"kubernetes.io/projected/943e71b2-4f7f-4746-8e43-ae9f9ddab819-kube-api-access-njdxm\") pod \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.450222 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-combined-ca-bundle\") pod \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.450402 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-config-data\") pod \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\" (UID: \"943e71b2-4f7f-4746-8e43-ae9f9ddab819\") " Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.460030 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "943e71b2-4f7f-4746-8e43-ae9f9ddab819" (UID: "943e71b2-4f7f-4746-8e43-ae9f9ddab819"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.481592 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943e71b2-4f7f-4746-8e43-ae9f9ddab819-kube-api-access-njdxm" (OuterVolumeSpecName: "kube-api-access-njdxm") pod "943e71b2-4f7f-4746-8e43-ae9f9ddab819" (UID: "943e71b2-4f7f-4746-8e43-ae9f9ddab819"). InnerVolumeSpecName "kube-api-access-njdxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.494822 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "943e71b2-4f7f-4746-8e43-ae9f9ddab819" (UID: "943e71b2-4f7f-4746-8e43-ae9f9ddab819"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.523843 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-config-data" (OuterVolumeSpecName: "config-data") pod "943e71b2-4f7f-4746-8e43-ae9f9ddab819" (UID: "943e71b2-4f7f-4746-8e43-ae9f9ddab819"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.553587 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njdxm\" (UniqueName: \"kubernetes.io/projected/943e71b2-4f7f-4746-8e43-ae9f9ddab819-kube-api-access-njdxm\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.553661 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.553682 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.553700 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/943e71b2-4f7f-4746-8e43-ae9f9ddab819-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.986103 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" event={"ID":"943e71b2-4f7f-4746-8e43-ae9f9ddab819","Type":"ContainerDied","Data":"3ec224e6dd0b0c48f85ebd98e4b23f4fcfb5129e4552232206f552688ef685fd"} Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.986622 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ec224e6dd0b0c48f85ebd98e4b23f4fcfb5129e4552232206f552688ef685fd" Jan 21 21:37:47 crc kubenswrapper[4860]: I0121 21:37:47.986250 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-982rj" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.284473 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:37:48 crc kubenswrapper[4860]: E0121 21:37:48.285090 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943e71b2-4f7f-4746-8e43-ae9f9ddab819" containerName="watcher-kuttl-db-sync" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.285121 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="943e71b2-4f7f-4746-8e43-ae9f9ddab819" containerName="watcher-kuttl-db-sync" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.285379 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="943e71b2-4f7f-4746-8e43-ae9f9ddab819" containerName="watcher-kuttl-db-sync" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.286713 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.289531 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-57bm7" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.289719 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.299762 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.370220 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.370378 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.370439 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.370491 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.370530 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx7fz\" (UniqueName: \"kubernetes.io/projected/01200251-c652-48cd-ac68-c422cd325f71-kube-api-access-cx7fz\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.370550 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01200251-c652-48cd-ac68-c422cd325f71-logs\") pod \"watcher-kuttl-api-0\" (UID: 
\"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.408230 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.409710 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.412396 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.439003 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.472385 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.472461 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.472774 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.472960 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.473050 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10a74e6-0097-4e91-9d5b-72169c3ffc36-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.473115 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntv5r\" (UniqueName: \"kubernetes.io/projected/b10a74e6-0097-4e91-9d5b-72169c3ffc36-kube-api-access-ntv5r\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.473162 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.473218 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.473437 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.473566 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx7fz\" (UniqueName: \"kubernetes.io/projected/01200251-c652-48cd-ac68-c422cd325f71-kube-api-access-cx7fz\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.473600 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01200251-c652-48cd-ac68-c422cd325f71-logs\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.474321 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01200251-c652-48cd-ac68-c422cd325f71-logs\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.479925 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 
21:37:48.481782 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.484662 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.485336 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.511652 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx7fz\" (UniqueName: \"kubernetes.io/projected/01200251-c652-48cd-ac68-c422cd325f71-kube-api-access-cx7fz\") pod \"watcher-kuttl-api-0\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.526069 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.532628 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.537960 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.570547 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.577405 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.577593 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10a74e6-0097-4e91-9d5b-72169c3ffc36-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.577637 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntv5r\" (UniqueName: \"kubernetes.io/projected/b10a74e6-0097-4e91-9d5b-72169c3ffc36-kube-api-access-ntv5r\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.577683 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.577712 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.581030 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10a74e6-0097-4e91-9d5b-72169c3ffc36-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.583994 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.584210 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.602992 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc 
kubenswrapper[4860]: I0121 21:37:48.608336 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntv5r\" (UniqueName: \"kubernetes.io/projected/b10a74e6-0097-4e91-9d5b-72169c3ffc36-kube-api-access-ntv5r\") pod \"watcher-kuttl-applier-0\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.610730 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.679905 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.680071 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgm6g\" (UniqueName: \"kubernetes.io/projected/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-kube-api-access-zgm6g\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.680118 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.680150 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.680190 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.680216 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.728785 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.781807 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.783077 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.783148 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.783186 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.783235 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.783475 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgm6g\" (UniqueName: \"kubernetes.io/projected/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-kube-api-access-zgm6g\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.784628 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.791284 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.791461 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.792608 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-cert-memcached-mtls\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.805780 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:48 crc kubenswrapper[4860]: I0121 21:37:48.807435 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgm6g\" (UniqueName: \"kubernetes.io/projected/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-kube-api-access-zgm6g\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:49 crc kubenswrapper[4860]: I0121 21:37:49.001348 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:37:49 crc kubenswrapper[4860]: W0121 21:37:49.153027 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01200251_c652_48cd_ac68_c422cd325f71.slice/crio-a06f063ee075c07b034eb06e1ab62aee166382400ed9ea9b1543346c91cf4ed3 WatchSource:0}: Error finding container a06f063ee075c07b034eb06e1ab62aee166382400ed9ea9b1543346c91cf4ed3: Status 404 returned error can't find the container with id a06f063ee075c07b034eb06e1ab62aee166382400ed9ea9b1543346c91cf4ed3 Jan 21 21:37:49 crc kubenswrapper[4860]: I0121 21:37:49.154206 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:37:49 crc kubenswrapper[4860]: I0121 21:37:49.310393 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:37:49 crc kubenswrapper[4860]: I0121 21:37:49.513223 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:37:49 crc kubenswrapper[4860]: W0121 21:37:49.523330 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d7e6ba8_85c9_44f6_8cd8_fff802df95f6.slice/crio-38e5cd6caa2866671fe7e19e8544787a7fdf8d9a7b530582668a221b6b22d5ab WatchSource:0}: Error finding container 38e5cd6caa2866671fe7e19e8544787a7fdf8d9a7b530582668a221b6b22d5ab: Status 404 returned error can't find the container with id 38e5cd6caa2866671fe7e19e8544787a7fdf8d9a7b530582668a221b6b22d5ab Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.008128 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"b10a74e6-0097-4e91-9d5b-72169c3ffc36","Type":"ContainerStarted","Data":"05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff"} Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.008745 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b10a74e6-0097-4e91-9d5b-72169c3ffc36","Type":"ContainerStarted","Data":"68c7ea0d63112ae7f7cf93e1a4badb018ac773255e68806515eeb93ce73d3b00"} Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.011596 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"01200251-c652-48cd-ac68-c422cd325f71","Type":"ContainerStarted","Data":"dda1dd83abe1be5cd33a9e38a5602b6e3bae8ec487870a0db4416574e83a4965"} Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.011677 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"01200251-c652-48cd-ac68-c422cd325f71","Type":"ContainerStarted","Data":"42807b3b9f4026ed8514be6715a097d7a897eab8c7d15bfef10fa01bc87822b0"} Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.011724 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.011738 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"01200251-c652-48cd-ac68-c422cd325f71","Type":"ContainerStarted","Data":"a06f063ee075c07b034eb06e1ab62aee166382400ed9ea9b1543346c91cf4ed3"} Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.014032 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6","Type":"ContainerStarted","Data":"07efbfc894fb85132cbd1b08de9be0ff3681facacf231d8f3ac8c3b20673d43e"} Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 
21:37:50.014102 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6","Type":"ContainerStarted","Data":"38e5cd6caa2866671fe7e19e8544787a7fdf8d9a7b530582668a221b6b22d5ab"} Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.014210 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="01200251-c652-48cd-ac68-c422cd325f71" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.182:9322/\": dial tcp 10.217.0.182:9322: connect: connection refused" Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.045458 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.045428918 podStartE2EDuration="2.045428918s" podCreationTimestamp="2026-01-21 21:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:37:50.035016358 +0000 UTC m=+1762.257194828" watchObservedRunningTime="2026-01-21 21:37:50.045428918 +0000 UTC m=+1762.267607388" Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.062651 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.062619857 podStartE2EDuration="2.062619857s" podCreationTimestamp="2026-01-21 21:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:37:50.060120751 +0000 UTC m=+1762.282299231" watchObservedRunningTime="2026-01-21 21:37:50.062619857 +0000 UTC m=+1762.284798327" Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.097864 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
podStartSLOduration=2.097840572 podStartE2EDuration="2.097840572s" podCreationTimestamp="2026-01-21 21:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:37:50.089084132 +0000 UTC m=+1762.311262602" watchObservedRunningTime="2026-01-21 21:37:50.097840572 +0000 UTC m=+1762.320019032" Jan 21 21:37:50 crc kubenswrapper[4860]: I0121 21:37:50.135840 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:37:51 crc kubenswrapper[4860]: I0121 21:37:51.346413 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:37:52 crc kubenswrapper[4860]: I0121 21:37:52.584792 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:37:53 crc kubenswrapper[4860]: I0121 21:37:53.439585 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:53 crc kubenswrapper[4860]: I0121 21:37:53.612428 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:37:53 crc kubenswrapper[4860]: I0121 21:37:53.730067 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:37:53 crc kubenswrapper[4860]: I0121 21:37:53.844291 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:37:55 crc kubenswrapper[4860]: I0121 21:37:55.121337 
4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log"
Jan 21 21:37:56 crc kubenswrapper[4860]: I0121 21:37:56.396329 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log"
Jan 21 21:37:56 crc kubenswrapper[4860]: I0121 21:37:56.579221 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97"
Jan 21 21:37:56 crc kubenswrapper[4860]: E0121 21:37:56.579584 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:37:57 crc kubenswrapper[4860]: I0121 21:37:57.683777 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log"
Jan 21 21:37:58 crc kubenswrapper[4860]: I0121 21:37:58.612500 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:58 crc kubenswrapper[4860]: I0121 21:37:58.618898 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:58 crc kubenswrapper[4860]: I0121 21:37:58.730202 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:58 crc kubenswrapper[4860]: I0121 21:37:58.760956 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:58 crc kubenswrapper[4860]: I0121 21:37:58.957339 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log"
Jan 21 21:37:59 crc kubenswrapper[4860]: I0121 21:37:59.003426 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:59 crc kubenswrapper[4860]: I0121 21:37:59.032714 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:59 crc kubenswrapper[4860]: I0121 21:37:59.102204 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:37:59 crc kubenswrapper[4860]: I0121 21:37:59.112576 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:37:59 crc kubenswrapper[4860]: I0121 21:37:59.131596 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:37:59 crc kubenswrapper[4860]: I0121 21:37:59.172782 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:38:00 crc kubenswrapper[4860]: I0121 21:38:00.168754 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log"
Jan 21 21:38:00 crc kubenswrapper[4860]: I0121 21:38:00.486880 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log"
Jan 21 21:38:00 crc kubenswrapper[4860]: I0121 21:38:00.985949 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-create-nspmr"]
Jan 21 21:38:00 crc kubenswrapper[4860]: I0121 21:38:00.988353 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-nspmr"
Jan 21 21:38:00 crc kubenswrapper[4860]: I0121 21:38:00.999211 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-nspmr"]
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.022261 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"]
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.024263 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.037749 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-db-secret"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.059802 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"]
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.190273 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3250668b-0249-48e2-b1a7-def619c72d7c-operator-scripts\") pod \"cinder-db-create-nspmr\" (UID: \"3250668b-0249-48e2-b1a7-def619c72d7c\") " pod="watcher-kuttl-default/cinder-db-create-nspmr"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.190408 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/639aa53e-95ce-499f-a6af-f6ffb3d07f31-operator-scripts\") pod \"cinder-19d1-account-create-update-ms5wz\" (UID: \"639aa53e-95ce-499f-a6af-f6ffb3d07f31\") " pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.190555 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7cn\" (UniqueName: \"kubernetes.io/projected/639aa53e-95ce-499f-a6af-f6ffb3d07f31-kube-api-access-bh7cn\") pod \"cinder-19d1-account-create-update-ms5wz\" (UID: \"639aa53e-95ce-499f-a6af-f6ffb3d07f31\") " pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.192171 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ksng\" (UniqueName: \"kubernetes.io/projected/3250668b-0249-48e2-b1a7-def619c72d7c-kube-api-access-2ksng\") pod \"cinder-db-create-nspmr\" (UID: \"3250668b-0249-48e2-b1a7-def619c72d7c\") " pod="watcher-kuttl-default/cinder-db-create-nspmr"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.294410 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ksng\" (UniqueName: \"kubernetes.io/projected/3250668b-0249-48e2-b1a7-def619c72d7c-kube-api-access-2ksng\") pod \"cinder-db-create-nspmr\" (UID: \"3250668b-0249-48e2-b1a7-def619c72d7c\") " pod="watcher-kuttl-default/cinder-db-create-nspmr"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.294512 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3250668b-0249-48e2-b1a7-def619c72d7c-operator-scripts\") pod \"cinder-db-create-nspmr\" (UID: \"3250668b-0249-48e2-b1a7-def619c72d7c\") " pod="watcher-kuttl-default/cinder-db-create-nspmr"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.294550 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/639aa53e-95ce-499f-a6af-f6ffb3d07f31-operator-scripts\") pod \"cinder-19d1-account-create-update-ms5wz\" (UID: \"639aa53e-95ce-499f-a6af-f6ffb3d07f31\") " pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.294624 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh7cn\" (UniqueName: \"kubernetes.io/projected/639aa53e-95ce-499f-a6af-f6ffb3d07f31-kube-api-access-bh7cn\") pod \"cinder-19d1-account-create-update-ms5wz\" (UID: \"639aa53e-95ce-499f-a6af-f6ffb3d07f31\") " pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.295695 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3250668b-0249-48e2-b1a7-def619c72d7c-operator-scripts\") pod \"cinder-db-create-nspmr\" (UID: \"3250668b-0249-48e2-b1a7-def619c72d7c\") " pod="watcher-kuttl-default/cinder-db-create-nspmr"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.295891 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/639aa53e-95ce-499f-a6af-f6ffb3d07f31-operator-scripts\") pod \"cinder-19d1-account-create-update-ms5wz\" (UID: \"639aa53e-95ce-499f-a6af-f6ffb3d07f31\") " pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.324970 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh7cn\" (UniqueName: \"kubernetes.io/projected/639aa53e-95ce-499f-a6af-f6ffb3d07f31-kube-api-access-bh7cn\") pod \"cinder-19d1-account-create-update-ms5wz\" (UID: \"639aa53e-95ce-499f-a6af-f6ffb3d07f31\") " pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.328876 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ksng\" (UniqueName: \"kubernetes.io/projected/3250668b-0249-48e2-b1a7-def619c72d7c-kube-api-access-2ksng\") pod \"cinder-db-create-nspmr\" (UID: \"3250668b-0249-48e2-b1a7-def619c72d7c\") " pod="watcher-kuttl-default/cinder-db-create-nspmr"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.357582 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.624501 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-nspmr"
Jan 21 21:38:01 crc kubenswrapper[4860]: I0121 21:38:01.754123 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log"
Jan 21 21:38:02 crc kubenswrapper[4860]: I0121 21:38:02.052463 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"]
Jan 21 21:38:02 crc kubenswrapper[4860]: I0121 21:38:02.060131 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:38:02 crc kubenswrapper[4860]: I0121 21:38:02.060556 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="ceilometer-central-agent" containerID="cri-o://7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5" gracePeriod=30
Jan 21 21:38:02 crc kubenswrapper[4860]: I0121 21:38:02.060736 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="proxy-httpd" containerID="cri-o://dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e" gracePeriod=30
Jan 21 21:38:02 crc kubenswrapper[4860]: I0121 21:38:02.060835 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="sg-core" containerID="cri-o://b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759" gracePeriod=30
Jan 21 21:38:02 crc kubenswrapper[4860]: I0121 21:38:02.060908 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="ceilometer-notification-agent" containerID="cri-o://21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e" gracePeriod=30
Jan 21 21:38:02 crc kubenswrapper[4860]: I0121 21:38:02.085167 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.178:3000/\": EOF"
Jan 21 21:38:02 crc kubenswrapper[4860]: I0121 21:38:02.163571 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz" event={"ID":"639aa53e-95ce-499f-a6af-f6ffb3d07f31","Type":"ContainerStarted","Data":"ffe815b85da0d615b708f0502e70d7be36fcbaee97f7409aa6f0d119522a6e56"}
Jan 21 21:38:02 crc kubenswrapper[4860]: I0121 21:38:02.923075 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-nspmr"]
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.018240 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.096913 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.151139 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-sg-core-conf-yaml\") pod \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") "
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.151214 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-log-httpd\") pod \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") "
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.151275 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-ceilometer-tls-certs\") pod \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") "
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.151415 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rp5x\" (UniqueName: \"kubernetes.io/projected/a953ea6f-ac47-4e84-9d3a-d48a50069a97-kube-api-access-6rp5x\") pod \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") "
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.151460 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-combined-ca-bundle\") pod \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") "
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.151527 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-run-httpd\") pod \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") "
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.151567 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-scripts\") pod \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") "
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.151660 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-config-data\") pod \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\" (UID: \"a953ea6f-ac47-4e84-9d3a-d48a50069a97\") "
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.153778 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a953ea6f-ac47-4e84-9d3a-d48a50069a97" (UID: "a953ea6f-ac47-4e84-9d3a-d48a50069a97"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.154086 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a953ea6f-ac47-4e84-9d3a-d48a50069a97" (UID: "a953ea6f-ac47-4e84-9d3a-d48a50069a97"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.163372 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a953ea6f-ac47-4e84-9d3a-d48a50069a97-kube-api-access-6rp5x" (OuterVolumeSpecName: "kube-api-access-6rp5x") pod "a953ea6f-ac47-4e84-9d3a-d48a50069a97" (UID: "a953ea6f-ac47-4e84-9d3a-d48a50069a97"). InnerVolumeSpecName "kube-api-access-6rp5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.177915 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-scripts" (OuterVolumeSpecName: "scripts") pod "a953ea6f-ac47-4e84-9d3a-d48a50069a97" (UID: "a953ea6f-ac47-4e84-9d3a-d48a50069a97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.183817 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-nspmr" event={"ID":"3250668b-0249-48e2-b1a7-def619c72d7c","Type":"ContainerStarted","Data":"889a4c6c103caa2eb44865a4bb0fefa12a0e569bda1b3122d46cfbb49ff37b97"}
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.194514 4860 generic.go:334] "Generic (PLEG): container finished" podID="639aa53e-95ce-499f-a6af-f6ffb3d07f31" containerID="43d62ccc3fb59822eae900a066691991ca32c84d9f5eff660bc9ea9bcc3f3fd0" exitCode=0
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.194596 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz" event={"ID":"639aa53e-95ce-499f-a6af-f6ffb3d07f31","Type":"ContainerDied","Data":"43d62ccc3fb59822eae900a066691991ca32c84d9f5eff660bc9ea9bcc3f3fd0"}
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.211043 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a953ea6f-ac47-4e84-9d3a-d48a50069a97" (UID: "a953ea6f-ac47-4e84-9d3a-d48a50069a97"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.217678 4860 generic.go:334] "Generic (PLEG): container finished" podID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerID="dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e" exitCode=0
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.217719 4860 generic.go:334] "Generic (PLEG): container finished" podID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerID="b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759" exitCode=2
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.217727 4860 generic.go:334] "Generic (PLEG): container finished" podID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerID="21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e" exitCode=0
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.217736 4860 generic.go:334] "Generic (PLEG): container finished" podID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerID="7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5" exitCode=0
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.217763 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a953ea6f-ac47-4e84-9d3a-d48a50069a97","Type":"ContainerDied","Data":"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"}
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.217797 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a953ea6f-ac47-4e84-9d3a-d48a50069a97","Type":"ContainerDied","Data":"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"}
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.217812 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a953ea6f-ac47-4e84-9d3a-d48a50069a97","Type":"ContainerDied","Data":"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e"}
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.217822 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a953ea6f-ac47-4e84-9d3a-d48a50069a97","Type":"ContainerDied","Data":"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5"}
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.217833 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a953ea6f-ac47-4e84-9d3a-d48a50069a97","Type":"ContainerDied","Data":"39ffb24f0918083b71a66aef1736935f271beda51258a0209f4bbd4a8faaaf03"}
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.217852 4860 scope.go:117] "RemoveContainer" containerID="dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.218049 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.245339 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a953ea6f-ac47-4e84-9d3a-d48a50069a97" (UID: "a953ea6f-ac47-4e84-9d3a-d48a50069a97"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.246248 4860 scope.go:117] "RemoveContainer" containerID="b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.256918 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.256988 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.257148 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.257167 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rp5x\" (UniqueName: \"kubernetes.io/projected/a953ea6f-ac47-4e84-9d3a-d48a50069a97-kube-api-access-6rp5x\") on node \"crc\" DevicePath \"\""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.257184 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a953ea6f-ac47-4e84-9d3a-d48a50069a97-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.257197 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.267909 4860 scope.go:117] "RemoveContainer" containerID="21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.281900 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a953ea6f-ac47-4e84-9d3a-d48a50069a97" (UID: "a953ea6f-ac47-4e84-9d3a-d48a50069a97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.287273 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-config-data" (OuterVolumeSpecName: "config-data") pod "a953ea6f-ac47-4e84-9d3a-d48a50069a97" (UID: "a953ea6f-ac47-4e84-9d3a-d48a50069a97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.293016 4860 scope.go:117] "RemoveContainer" containerID="7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.319167 4860 scope.go:117] "RemoveContainer" containerID="dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"
Jan 21 21:38:03 crc kubenswrapper[4860]: E0121 21:38:03.319908 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e\": container with ID starting with dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e not found: ID does not exist" containerID="dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.320016 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"} err="failed to get container status \"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e\": rpc error: code = NotFound desc = could not find container \"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e\": container with ID starting with dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.320083 4860 scope.go:117] "RemoveContainer" containerID="b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"
Jan 21 21:38:03 crc kubenswrapper[4860]: E0121 21:38:03.320676 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759\": container with ID starting with b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759 not found: ID does not exist" containerID="b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.320746 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"} err="failed to get container status \"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759\": rpc error: code = NotFound desc = could not find container \"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759\": container with ID starting with b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759 not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.320797 4860 scope.go:117] "RemoveContainer" containerID="21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e"
Jan 21 21:38:03 crc kubenswrapper[4860]: E0121 21:38:03.321554 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e\": container with ID starting with 21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e not found: ID does not exist" containerID="21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.321611 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e"} err="failed to get container status \"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e\": rpc error: code = NotFound desc = could not find container \"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e\": container with ID starting with 21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.321633 4860 scope.go:117] "RemoveContainer" containerID="7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5"
Jan 21 21:38:03 crc kubenswrapper[4860]: E0121 21:38:03.322126 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5\": container with ID starting with 7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5 not found: ID does not exist" containerID="7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.322209 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5"} err="failed to get container status \"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5\": rpc error: code = NotFound desc = could not find container \"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5\": container with ID starting with 7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5 not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.322271 4860 scope.go:117] "RemoveContainer" containerID="dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.323198 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"} err="failed to get container status \"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e\": rpc error: code = NotFound desc = could not find container \"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e\": container with ID starting with dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.323263 4860 scope.go:117] "RemoveContainer" containerID="b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.323667 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"} err="failed to get container status \"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759\": rpc error: code = NotFound desc = could not find container \"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759\": container with ID starting with b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759 not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.323705 4860 scope.go:117] "RemoveContainer" containerID="21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.324167 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e"} err="failed to get container status \"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e\": rpc error: code = NotFound desc = could not find container \"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e\": container with ID starting with 21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.324195 4860 scope.go:117] "RemoveContainer" containerID="7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.324801 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5"} err="failed to get container status \"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5\": rpc error: code = NotFound desc = could not find container \"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5\": container with ID starting with 7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5 not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.324828 4860 scope.go:117] "RemoveContainer" containerID="dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.325533 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"} err="failed to get container status \"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e\": rpc error: code = NotFound desc = could not find container \"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e\": container with ID starting with dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.325568 4860 scope.go:117] "RemoveContainer" containerID="b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.325982 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"} err="failed to get container status \"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759\": rpc error: code = NotFound desc = could not find container \"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759\": container with ID starting with b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759 not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.326013 4860 scope.go:117] "RemoveContainer" containerID="21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.326455 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e"} err="failed to get container status \"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e\": rpc error: code = NotFound desc = could not find container \"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e\": container with ID starting with 21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.326478 4860 scope.go:117] "RemoveContainer" containerID="7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.326828 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5"} err="failed to get container status \"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5\": rpc error: code = NotFound desc = could not find container \"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5\": container with ID starting with 7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5 not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.326848 4860 scope.go:117] "RemoveContainer" containerID="dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.327220 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e"} err="failed to get container status \"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e\": rpc error: code = NotFound desc = could not find container \"dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e\": container with ID starting with dcbeffa4ede73333cc5b6e4d40aa0560ea10ad76bcd9edd57bcaefffad9ab67e not found: ID does not exist"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.327250 4860 scope.go:117] "RemoveContainer" containerID="b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"
Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.327745 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759"} err="failed to get container status \"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759\": rpc error: code = NotFound desc = could not find container \"b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759\": container with ID starting with b44fdd756d53e8de5416656058b69b493913aba763a8beb5eb8c87d03cd71759 not found: ID does not
exist" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.327803 4860 scope.go:117] "RemoveContainer" containerID="21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.328157 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e"} err="failed to get container status \"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e\": rpc error: code = NotFound desc = could not find container \"21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e\": container with ID starting with 21ab4d1514d5440ef54d840a0ac899a0d2793734f8a087b406c8d5d8cc3cb77e not found: ID does not exist" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.328201 4860 scope.go:117] "RemoveContainer" containerID="7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.328576 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5"} err="failed to get container status \"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5\": rpc error: code = NotFound desc = could not find container \"7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5\": container with ID starting with 7b0dcedd8cddde5cde90d1ac5494e80da9bf42f054ce7d51cdd49f5ad183ead5 not found: ID does not exist" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.361559 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.361625 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a953ea6f-ac47-4e84-9d3a-d48a50069a97-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.560861 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.570459 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.592405 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:38:03 crc kubenswrapper[4860]: E0121 21:38:03.593408 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="ceilometer-central-agent" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.593429 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="ceilometer-central-agent" Jan 21 21:38:03 crc kubenswrapper[4860]: E0121 21:38:03.593442 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="sg-core" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.593448 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="sg-core" Jan 21 21:38:03 crc kubenswrapper[4860]: E0121 21:38:03.593461 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="proxy-httpd" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.593467 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="proxy-httpd" Jan 21 21:38:03 crc kubenswrapper[4860]: E0121 21:38:03.593509 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="ceilometer-notification-agent" Jan 21 
21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.593515 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="ceilometer-notification-agent" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.593673 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="sg-core" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.593688 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="ceilometer-central-agent" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.593702 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="ceilometer-notification-agent" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.593710 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" containerName="proxy-httpd" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.595997 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.604297 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.604348 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.604624 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.620452 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.668482 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtskv\" (UniqueName: \"kubernetes.io/projected/2159c959-e321-407a-9b5e-9e7a7a137a16-kube-api-access-dtskv\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.668680 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.668782 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: 
I0121 21:38:03.668969 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.669063 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-config-data\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.669136 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-scripts\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.669221 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-log-httpd\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.669326 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-run-httpd\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.771365 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.771944 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.772011 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.772044 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-config-data\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.772068 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-scripts\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.772104 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-log-httpd\") pod \"ceilometer-0\" (UID: 
\"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.772143 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-run-httpd\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.772178 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtskv\" (UniqueName: \"kubernetes.io/projected/2159c959-e321-407a-9b5e-9e7a7a137a16-kube-api-access-dtskv\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.774028 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-log-httpd\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.774331 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-run-httpd\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.782453 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.783053 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.783228 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-config-data\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.788633 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-scripts\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.793482 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtskv\" (UniqueName: \"kubernetes.io/projected/2159c959-e321-407a-9b5e-9e7a7a137a16-kube-api-access-dtskv\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.801685 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:03 crc kubenswrapper[4860]: I0121 21:38:03.928657 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.233340 4860 generic.go:334] "Generic (PLEG): container finished" podID="3250668b-0249-48e2-b1a7-def619c72d7c" containerID="43d3af72f152f610f81572888c295590f15938acae3ba317e91a4edaf351e6a9" exitCode=0 Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.233436 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-nspmr" event={"ID":"3250668b-0249-48e2-b1a7-def619c72d7c","Type":"ContainerDied","Data":"43d3af72f152f610f81572888c295590f15938acae3ba317e91a4edaf351e6a9"} Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.325396 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.430097 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:38:04 crc kubenswrapper[4860]: W0121 21:38:04.449603 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2159c959_e321_407a_9b5e_9e7a7a137a16.slice/crio-d2bce750a7000a75ffe2e8767ee0a4f4db8ed36fe7f0932314204ea4815058cf WatchSource:0}: Error finding container d2bce750a7000a75ffe2e8767ee0a4f4db8ed36fe7f0932314204ea4815058cf: Status 404 returned error can't find the container with id d2bce750a7000a75ffe2e8767ee0a4f4db8ed36fe7f0932314204ea4815058cf Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.593973 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a953ea6f-ac47-4e84-9d3a-d48a50069a97" path="/var/lib/kubelet/pods/a953ea6f-ac47-4e84-9d3a-d48a50069a97/volumes" Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.639356 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz" Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.698038 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh7cn\" (UniqueName: \"kubernetes.io/projected/639aa53e-95ce-499f-a6af-f6ffb3d07f31-kube-api-access-bh7cn\") pod \"639aa53e-95ce-499f-a6af-f6ffb3d07f31\" (UID: \"639aa53e-95ce-499f-a6af-f6ffb3d07f31\") " Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.698637 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/639aa53e-95ce-499f-a6af-f6ffb3d07f31-operator-scripts\") pod \"639aa53e-95ce-499f-a6af-f6ffb3d07f31\" (UID: \"639aa53e-95ce-499f-a6af-f6ffb3d07f31\") " Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.699766 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639aa53e-95ce-499f-a6af-f6ffb3d07f31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "639aa53e-95ce-499f-a6af-f6ffb3d07f31" (UID: "639aa53e-95ce-499f-a6af-f6ffb3d07f31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.701681 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/639aa53e-95ce-499f-a6af-f6ffb3d07f31-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.704370 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/639aa53e-95ce-499f-a6af-f6ffb3d07f31-kube-api-access-bh7cn" (OuterVolumeSpecName: "kube-api-access-bh7cn") pod "639aa53e-95ce-499f-a6af-f6ffb3d07f31" (UID: "639aa53e-95ce-499f-a6af-f6ffb3d07f31"). InnerVolumeSpecName "kube-api-access-bh7cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:38:04 crc kubenswrapper[4860]: I0121 21:38:04.803501 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh7cn\" (UniqueName: \"kubernetes.io/projected/639aa53e-95ce-499f-a6af-f6ffb3d07f31-kube-api-access-bh7cn\") on node \"crc\" DevicePath \"\"" Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.244217 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2159c959-e321-407a-9b5e-9e7a7a137a16","Type":"ContainerStarted","Data":"66580bd3983b89d20d2f295e0a7b3b77b9d83de645c0bccaaf1c0bcc8bf1b145"} Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.244272 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2159c959-e321-407a-9b5e-9e7a7a137a16","Type":"ContainerStarted","Data":"d2bce750a7000a75ffe2e8767ee0a4f4db8ed36fe7f0932314204ea4815058cf"} Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.245981 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz" event={"ID":"639aa53e-95ce-499f-a6af-f6ffb3d07f31","Type":"ContainerDied","Data":"ffe815b85da0d615b708f0502e70d7be36fcbaee97f7409aa6f0d119522a6e56"} Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.246017 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffe815b85da0d615b708f0502e70d7be36fcbaee97f7409aa6f0d119522a6e56" Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.246054 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz" Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.556247 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.627167 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-nspmr" Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.728250 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ksng\" (UniqueName: \"kubernetes.io/projected/3250668b-0249-48e2-b1a7-def619c72d7c-kube-api-access-2ksng\") pod \"3250668b-0249-48e2-b1a7-def619c72d7c\" (UID: \"3250668b-0249-48e2-b1a7-def619c72d7c\") " Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.728304 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3250668b-0249-48e2-b1a7-def619c72d7c-operator-scripts\") pod \"3250668b-0249-48e2-b1a7-def619c72d7c\" (UID: \"3250668b-0249-48e2-b1a7-def619c72d7c\") " Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.729522 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3250668b-0249-48e2-b1a7-def619c72d7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3250668b-0249-48e2-b1a7-def619c72d7c" (UID: "3250668b-0249-48e2-b1a7-def619c72d7c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.735702 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3250668b-0249-48e2-b1a7-def619c72d7c-kube-api-access-2ksng" (OuterVolumeSpecName: "kube-api-access-2ksng") pod "3250668b-0249-48e2-b1a7-def619c72d7c" (UID: "3250668b-0249-48e2-b1a7-def619c72d7c"). InnerVolumeSpecName "kube-api-access-2ksng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.831868 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ksng\" (UniqueName: \"kubernetes.io/projected/3250668b-0249-48e2-b1a7-def619c72d7c-kube-api-access-2ksng\") on node \"crc\" DevicePath \"\"" Jan 21 21:38:05 crc kubenswrapper[4860]: I0121 21:38:05.832483 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3250668b-0249-48e2-b1a7-def619c72d7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:38:06 crc kubenswrapper[4860]: I0121 21:38:06.277552 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2159c959-e321-407a-9b5e-9e7a7a137a16","Type":"ContainerStarted","Data":"2c5500ecec7bfb6ab766d4cb3c9b089c01ca805ec2b0e7e3aa9b1ddf400f49e4"} Jan 21 21:38:06 crc kubenswrapper[4860]: I0121 21:38:06.280175 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-nspmr" event={"ID":"3250668b-0249-48e2-b1a7-def619c72d7c","Type":"ContainerDied","Data":"889a4c6c103caa2eb44865a4bb0fefa12a0e569bda1b3122d46cfbb49ff37b97"} Jan 21 21:38:06 crc kubenswrapper[4860]: I0121 21:38:06.280228 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889a4c6c103caa2eb44865a4bb0fefa12a0e569bda1b3122d46cfbb49ff37b97" Jan 21 21:38:06 crc kubenswrapper[4860]: I0121 21:38:06.280312 4860 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-nspmr" Jan 21 21:38:06 crc kubenswrapper[4860]: I0121 21:38:06.783260 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:07 crc kubenswrapper[4860]: I0121 21:38:07.292512 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2159c959-e321-407a-9b5e-9e7a7a137a16","Type":"ContainerStarted","Data":"e20293595699bb46a4c4fb4d03ae645dc8aaf95cc1e594f048332bbc6818a197"} Jan 21 21:38:07 crc kubenswrapper[4860]: I0121 21:38:07.994490 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:08 crc kubenswrapper[4860]: I0121 21:38:08.305653 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2159c959-e321-407a-9b5e-9e7a7a137a16","Type":"ContainerStarted","Data":"4fab2ecfebcf6ce1b7792b4677e12cb1d5b8e9e20dabbbe2f5eb8cc7ff7df311"} Jan 21 21:38:08 crc kubenswrapper[4860]: I0121 21:38:08.305883 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:08 crc kubenswrapper[4860]: I0121 21:38:08.591197 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:38:08 crc kubenswrapper[4860]: E0121 21:38:08.591875 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:38:09 crc kubenswrapper[4860]: I0121 21:38:09.189275 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:10 crc kubenswrapper[4860]: I0121 21:38:10.473192 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.225889 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=5.033723105 podStartE2EDuration="8.225856202s" podCreationTimestamp="2026-01-21 21:38:03 +0000 UTC" firstStartedPulling="2026-01-21 21:38:04.458497651 +0000 UTC m=+1776.680676121" lastFinishedPulling="2026-01-21 21:38:07.650630748 +0000 UTC m=+1779.872809218" observedRunningTime="2026-01-21 21:38:08.333367584 +0000 UTC m=+1780.555546054" watchObservedRunningTime="2026-01-21 21:38:11.225856202 +0000 UTC m=+1783.448034672" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.230329 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-sync-nks5c"] Jan 21 21:38:11 crc kubenswrapper[4860]: E0121 21:38:11.230758 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639aa53e-95ce-499f-a6af-f6ffb3d07f31" containerName="mariadb-account-create-update" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.230779 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="639aa53e-95ce-499f-a6af-f6ffb3d07f31" containerName="mariadb-account-create-update" Jan 21 21:38:11 crc kubenswrapper[4860]: E0121 21:38:11.230820 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3250668b-0249-48e2-b1a7-def619c72d7c" 
containerName="mariadb-database-create" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.230829 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3250668b-0249-48e2-b1a7-def619c72d7c" containerName="mariadb-database-create" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.231021 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="639aa53e-95ce-499f-a6af-f6ffb3d07f31" containerName="mariadb-account-create-update" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.231042 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3250668b-0249-48e2-b1a7-def619c72d7c" containerName="mariadb-database-create" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.231767 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.235370 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.236024 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-vmxjj" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.238413 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.257033 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-nks5c"] Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.345759 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-config-data\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 
21:38:11.345842 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba9197a5-7a88-494b-927d-5e3fc723d5e0-etc-machine-id\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.345904 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-scripts\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.346019 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-combined-ca-bundle\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.346049 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-db-sync-config-data\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.346075 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzwlh\" (UniqueName: \"kubernetes.io/projected/ba9197a5-7a88-494b-927d-5e3fc723d5e0-kube-api-access-dzwlh\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 
crc kubenswrapper[4860]: I0121 21:38:11.448669 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-combined-ca-bundle\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.448763 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-db-sync-config-data\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.448824 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzwlh\" (UniqueName: \"kubernetes.io/projected/ba9197a5-7a88-494b-927d-5e3fc723d5e0-kube-api-access-dzwlh\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.448893 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-config-data\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.448979 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba9197a5-7a88-494b-927d-5e3fc723d5e0-etc-machine-id\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.449043 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-scripts\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.450581 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba9197a5-7a88-494b-927d-5e3fc723d5e0-etc-machine-id\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.456536 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-db-sync-config-data\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.456831 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-combined-ca-bundle\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.462227 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-config-data\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.472960 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzwlh\" (UniqueName: 
\"kubernetes.io/projected/ba9197a5-7a88-494b-927d-5e3fc723d5e0-kube-api-access-dzwlh\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.477894 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-scripts\") pod \"cinder-db-sync-nks5c\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.553826 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:11 crc kubenswrapper[4860]: I0121 21:38:11.774071 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:12 crc kubenswrapper[4860]: I0121 21:38:12.126216 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-nks5c"] Jan 21 21:38:12 crc kubenswrapper[4860]: I0121 21:38:12.348612 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-nks5c" event={"ID":"ba9197a5-7a88-494b-927d-5e3fc723d5e0","Type":"ContainerStarted","Data":"cdbbde5d9ec609de67994acc5aeca01275587940761ef5a88238f713f87f81c9"} Jan 21 21:38:13 crc kubenswrapper[4860]: I0121 21:38:13.053163 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:14 crc kubenswrapper[4860]: I0121 21:38:14.326492 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" 
Jan 21 21:38:15 crc kubenswrapper[4860]: I0121 21:38:15.611150 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:16 crc kubenswrapper[4860]: I0121 21:38:16.889265 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:18 crc kubenswrapper[4860]: I0121 21:38:18.164004 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:19 crc kubenswrapper[4860]: I0121 21:38:19.445156 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:19 crc kubenswrapper[4860]: I0121 21:38:19.579477 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:38:19 crc kubenswrapper[4860]: E0121 21:38:19.579789 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:38:20 crc kubenswrapper[4860]: I0121 21:38:20.752396 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:21 crc kubenswrapper[4860]: I0121 21:38:21.979292 4860 log.go:25] 
"Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:23 crc kubenswrapper[4860]: I0121 21:38:23.222562 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:24 crc kubenswrapper[4860]: I0121 21:38:24.477750 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:25 crc kubenswrapper[4860]: I0121 21:38:25.782754 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:27 crc kubenswrapper[4860]: I0121 21:38:27.171474 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:28 crc kubenswrapper[4860]: I0121 21:38:28.441203 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:29 crc kubenswrapper[4860]: I0121 21:38:29.750464 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:31 crc kubenswrapper[4860]: I0121 21:38:31.010809 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:32 crc kubenswrapper[4860]: I0121 21:38:32.324453 4860 log.go:25] 
"Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:33 crc kubenswrapper[4860]: I0121 21:38:33.569891 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:33 crc kubenswrapper[4860]: I0121 21:38:33.940638 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:38:34 crc kubenswrapper[4860]: I0121 21:38:34.583223 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:38:34 crc kubenswrapper[4860]: E0121 21:38:34.584352 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:38:34 crc kubenswrapper[4860]: I0121 21:38:34.840809 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:35 crc kubenswrapper[4860]: E0121 21:38:35.213373 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 21 21:38:35 crc kubenswrapper[4860]: E0121 21:38:35.213722 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzwlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsU
ser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nks5c_watcher-kuttl-default(ba9197a5-7a88-494b-927d-5e3fc723d5e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 21:38:35 crc kubenswrapper[4860]: E0121 21:38:35.215033 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/cinder-db-sync-nks5c" podUID="ba9197a5-7a88-494b-927d-5e3fc723d5e0" Jan 21 21:38:35 crc kubenswrapper[4860]: E0121 21:38:35.644412 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="watcher-kuttl-default/cinder-db-sync-nks5c" podUID="ba9197a5-7a88-494b-927d-5e3fc723d5e0" Jan 21 21:38:36 crc kubenswrapper[4860]: I0121 21:38:36.044413 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:37 crc kubenswrapper[4860]: I0121 21:38:37.340609 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:38 crc kubenswrapper[4860]: I0121 21:38:38.619189 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:39 crc kubenswrapper[4860]: I0121 21:38:39.884388 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:41 crc kubenswrapper[4860]: I0121 21:38:41.136572 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:42 crc kubenswrapper[4860]: I0121 21:38:42.368680 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:43 crc kubenswrapper[4860]: I0121 21:38:43.623030 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:44 crc kubenswrapper[4860]: I0121 21:38:44.875161 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:46 crc kubenswrapper[4860]: I0121 21:38:46.128306 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:46 crc kubenswrapper[4860]: I0121 21:38:46.579792 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:38:46 crc kubenswrapper[4860]: E0121 21:38:46.580579 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:38:47 crc kubenswrapper[4860]: I0121 21:38:47.387187 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:48 crc kubenswrapper[4860]: I0121 21:38:48.606608 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:49 crc kubenswrapper[4860]: I0121 21:38:49.789677 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-nks5c" event={"ID":"ba9197a5-7a88-494b-927d-5e3fc723d5e0","Type":"ContainerStarted","Data":"6398e7b23ddf4a00f8e28cd5e87ae34a0ceaa4c983ef9af321a2ac729545cd9d"} Jan 21 21:38:49 crc kubenswrapper[4860]: I0121 21:38:49.814022 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-db-sync-nks5c" podStartSLOduration=2.890536081 podStartE2EDuration="38.813999116s" podCreationTimestamp="2026-01-21 21:38:11 +0000 UTC" firstStartedPulling="2026-01-21 21:38:12.120449503 +0000 UTC m=+1784.342627983" lastFinishedPulling="2026-01-21 21:38:48.043912548 +0000 UTC m=+1820.266091018" observedRunningTime="2026-01-21 21:38:49.813703986 +0000 UTC m=+1822.035882456" watchObservedRunningTime="2026-01-21 21:38:49.813999116 +0000 UTC m=+1822.036177586" Jan 21 21:38:49 crc kubenswrapper[4860]: I0121 21:38:49.833561 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:51 crc kubenswrapper[4860]: I0121 21:38:51.076774 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:52 crc kubenswrapper[4860]: I0121 21:38:52.296435 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:53 crc kubenswrapper[4860]: I0121 21:38:53.529336 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:53 crc kubenswrapper[4860]: I0121 21:38:53.833117 4860 generic.go:334] "Generic (PLEG): container finished" podID="ba9197a5-7a88-494b-927d-5e3fc723d5e0" containerID="6398e7b23ddf4a00f8e28cd5e87ae34a0ceaa4c983ef9af321a2ac729545cd9d" exitCode=0 Jan 21 21:38:53 crc kubenswrapper[4860]: I0121 21:38:53.833176 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-nks5c" event={"ID":"ba9197a5-7a88-494b-927d-5e3fc723d5e0","Type":"ContainerDied","Data":"6398e7b23ddf4a00f8e28cd5e87ae34a0ceaa4c983ef9af321a2ac729545cd9d"} Jan 21 21:38:54 crc kubenswrapper[4860]: I0121 21:38:54.820351 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.222711 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.312667 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-db-sync-config-data\") pod \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.312864 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzwlh\" (UniqueName: \"kubernetes.io/projected/ba9197a5-7a88-494b-927d-5e3fc723d5e0-kube-api-access-dzwlh\") pod \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.312918 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-scripts\") pod \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.313113 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba9197a5-7a88-494b-927d-5e3fc723d5e0-etc-machine-id\") pod \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.313202 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-config-data\") pod \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.313322 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-combined-ca-bundle\") pod \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\" (UID: \"ba9197a5-7a88-494b-927d-5e3fc723d5e0\") " Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.313872 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba9197a5-7a88-494b-927d-5e3fc723d5e0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ba9197a5-7a88-494b-927d-5e3fc723d5e0" (UID: "ba9197a5-7a88-494b-927d-5e3fc723d5e0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.334250 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-scripts" (OuterVolumeSpecName: "scripts") pod "ba9197a5-7a88-494b-927d-5e3fc723d5e0" (UID: "ba9197a5-7a88-494b-927d-5e3fc723d5e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.334997 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ba9197a5-7a88-494b-927d-5e3fc723d5e0" (UID: "ba9197a5-7a88-494b-927d-5e3fc723d5e0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.354516 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba9197a5-7a88-494b-927d-5e3fc723d5e0-kube-api-access-dzwlh" (OuterVolumeSpecName: "kube-api-access-dzwlh") pod "ba9197a5-7a88-494b-927d-5e3fc723d5e0" (UID: "ba9197a5-7a88-494b-927d-5e3fc723d5e0"). InnerVolumeSpecName "kube-api-access-dzwlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.372902 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba9197a5-7a88-494b-927d-5e3fc723d5e0" (UID: "ba9197a5-7a88-494b-927d-5e3fc723d5e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.398345 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-config-data" (OuterVolumeSpecName: "config-data") pod "ba9197a5-7a88-494b-927d-5e3fc723d5e0" (UID: "ba9197a5-7a88-494b-927d-5e3fc723d5e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.415327 4860 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba9197a5-7a88-494b-927d-5e3fc723d5e0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.416547 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.416781 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.416795 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-db-sync-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.416808 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzwlh\" (UniqueName: \"kubernetes.io/projected/ba9197a5-7a88-494b-927d-5e3fc723d5e0-kube-api-access-dzwlh\") on node \"crc\" DevicePath \"\"" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.416824 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba9197a5-7a88-494b-927d-5e3fc723d5e0-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.856460 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-nks5c" event={"ID":"ba9197a5-7a88-494b-927d-5e3fc723d5e0","Type":"ContainerDied","Data":"cdbbde5d9ec609de67994acc5aeca01275587940761ef5a88238f713f87f81c9"} Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.857084 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdbbde5d9ec609de67994acc5aeca01275587940761ef5a88238f713f87f81c9" Jan 21 21:38:55 crc kubenswrapper[4860]: I0121 21:38:55.856889 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-nks5c" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.034200 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.238946 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:38:56 crc kubenswrapper[4860]: E0121 21:38:56.239428 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba9197a5-7a88-494b-927d-5e3fc723d5e0" containerName="cinder-db-sync" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.239451 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba9197a5-7a88-494b-927d-5e3fc723d5e0" containerName="cinder-db-sync" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.239640 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba9197a5-7a88-494b-927d-5e3fc723d5e0" containerName="cinder-db-sync" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.240849 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.265317 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.267142 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.273995 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-vmxjj" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.274276 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.276848 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.276862 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.277069 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.300015 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.321426 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.443385 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-run\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.443453 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data\") pod \"cinder-backup-0\" (UID: 
\"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.443498 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg6qp\" (UniqueName: \"kubernetes.io/projected/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-kube-api-access-hg6qp\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.445475 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-522xg\" (UniqueName: \"kubernetes.io/projected/92419e32-d07a-4c59-8dd7-228521d212ef-kube-api-access-522xg\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.445550 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-sys\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.445606 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.445795 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92419e32-d07a-4c59-8dd7-228521d212ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.445981 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-scripts\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.446101 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-scripts\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.446147 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.446194 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.446261 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-dev\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " 
pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.446302 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.446534 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.446640 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.446748 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-lib-modules\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.446844 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " 
pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.447081 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.447131 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.447170 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.447241 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.447326 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " 
pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.447388 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.510527 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.512577 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.516447 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.533248 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549075 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-522xg\" (UniqueName: \"kubernetes.io/projected/92419e32-d07a-4c59-8dd7-228521d212ef-kube-api-access-522xg\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549138 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-sys\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549169 4860 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549196 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92419e32-d07a-4c59-8dd7-228521d212ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549224 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-sys\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549269 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-scripts\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549367 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549382 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-scripts\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " 
pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549489 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549345 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92419e32-d07a-4c59-8dd7-228521d212ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549544 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549579 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-dev\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549621 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549654 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549689 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549707 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-lib-modules\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549757 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549782 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549799 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549835 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549876 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549582 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549952 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.549987 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-cert-memcached-mtls\") pod \"cinder-scheduler-0\" 
(UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.550013 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-dev\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.550135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.550170 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-run\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.550217 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-lib-modules\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.550267 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.550334 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.550356 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg6qp\" (UniqueName: \"kubernetes.io/projected/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-kube-api-access-hg6qp\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.550528 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.550597 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-run\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.551094 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.560673 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.560874 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.564385 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.564434 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.565842 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.566311 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.568487 
4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-scripts\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.572582 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.585511 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.590409 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg6qp\" (UniqueName: \"kubernetes.io/projected/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-kube-api-access-hg6qp\") pod \"cinder-backup-0\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.597612 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.601351 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-522xg\" (UniqueName: 
\"kubernetes.io/projected/92419e32-d07a-4c59-8dd7-228521d212ef-kube-api-access-522xg\") pod \"cinder-scheduler-0\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.652361 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-scripts\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.654835 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.654982 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80181717-d115-418a-b9be-d17cc852e9ec-etc-machine-id\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.655019 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn7zx\" (UniqueName: \"kubernetes.io/projected/80181717-d115-418a-b9be-d17cc852e9ec-kube-api-access-qn7zx\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.655051 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.655120 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.655149 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data-custom\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.655201 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80181717-d115-418a-b9be-d17cc852e9ec-logs\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.756430 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.756503 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.756546 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80181717-d115-418a-b9be-d17cc852e9ec-logs\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.756605 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-scripts\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.756635 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.756723 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80181717-d115-418a-b9be-d17cc852e9ec-etc-machine-id\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.756762 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn7zx\" (UniqueName: \"kubernetes.io/projected/80181717-d115-418a-b9be-d17cc852e9ec-kube-api-access-qn7zx\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.756787 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.757277 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80181717-d115-418a-b9be-d17cc852e9ec-logs\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.757357 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80181717-d115-418a-b9be-d17cc852e9ec-etc-machine-id\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.762007 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.764083 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.766622 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.769696 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-scripts\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.782511 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn7zx\" (UniqueName: \"kubernetes.io/projected/80181717-d115-418a-b9be-d17cc852e9ec-kube-api-access-qn7zx\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.782529 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data\") pod \"cinder-api-0\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.829414 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.883753 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:38:56 crc kubenswrapper[4860]: I0121 21:38:56.906607 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:38:57 crc kubenswrapper[4860]: I0121 21:38:57.322863 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:57 crc kubenswrapper[4860]: I0121 21:38:57.588403 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:38:57 crc kubenswrapper[4860]: E0121 21:38:57.588925 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:38:57 crc kubenswrapper[4860]: I0121 21:38:57.615423 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:38:57 crc kubenswrapper[4860]: I0121 21:38:57.671380 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 21 21:38:57 crc kubenswrapper[4860]: I0121 21:38:57.837679 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:38:57 crc kubenswrapper[4860]: I0121 21:38:57.891302 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"92419e32-d07a-4c59-8dd7-228521d212ef","Type":"ContainerStarted","Data":"1647ada149047c5cbd10e50fe7d56686ee25a35d2a3b76358728233690fd28ce"} Jan 21 21:38:57 crc kubenswrapper[4860]: I0121 21:38:57.895121 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" 
event={"ID":"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b","Type":"ContainerStarted","Data":"e52eaf32f245130963a139e3e5d23e28df30f0fd36273bcca29be14a1141f852"} Jan 21 21:38:57 crc kubenswrapper[4860]: I0121 21:38:57.902534 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"80181717-d115-418a-b9be-d17cc852e9ec","Type":"ContainerStarted","Data":"20ce40073327da9d82f148478972385365d52340bbe1a16e429073dd27add7de"} Jan 21 21:38:58 crc kubenswrapper[4860]: I0121 21:38:58.621283 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:58 crc kubenswrapper[4860]: I0121 21:38:58.913896 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"80181717-d115-418a-b9be-d17cc852e9ec","Type":"ContainerStarted","Data":"4e7e11e153ee33a9dcb2bb16b32fd498cd31ca954d27ec122049110e182bc57d"} Jan 21 21:38:59 crc kubenswrapper[4860]: I0121 21:38:59.003213 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 21 21:38:59 crc kubenswrapper[4860]: I0121 21:38:59.900019 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:38:59 crc kubenswrapper[4860]: I0121 21:38:59.924698 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"80181717-d115-418a-b9be-d17cc852e9ec","Type":"ContainerStarted","Data":"d8b4aef7e44b61bd6f15df66726f9bdaf3e361e9a43c1a27ef3598c38ac6eae5"} Jan 21 21:38:59 crc kubenswrapper[4860]: I0121 21:38:59.926028 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:38:59 crc kubenswrapper[4860]: I0121 21:38:59.925910 4860 
kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="80181717-d115-418a-b9be-d17cc852e9ec" containerName="cinder-api" containerID="cri-o://d8b4aef7e44b61bd6f15df66726f9bdaf3e361e9a43c1a27ef3598c38ac6eae5" gracePeriod=30 Jan 21 21:38:59 crc kubenswrapper[4860]: I0121 21:38:59.925304 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="80181717-d115-418a-b9be-d17cc852e9ec" containerName="cinder-api-log" containerID="cri-o://4e7e11e153ee33a9dcb2bb16b32fd498cd31ca954d27ec122049110e182bc57d" gracePeriod=30 Jan 21 21:38:59 crc kubenswrapper[4860]: I0121 21:38:59.945490 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b","Type":"ContainerStarted","Data":"0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4"} Jan 21 21:38:59 crc kubenswrapper[4860]: I0121 21:38:59.945828 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b","Type":"ContainerStarted","Data":"7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808"} Jan 21 21:38:59 crc kubenswrapper[4860]: I0121 21:38:59.973407 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=3.973382703 podStartE2EDuration="3.973382703s" podCreationTimestamp="2026-01-21 21:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:38:59.970262266 +0000 UTC m=+1832.192440746" watchObservedRunningTime="2026-01-21 21:38:59.973382703 +0000 UTC m=+1832.195561163" Jan 21 21:39:00 crc kubenswrapper[4860]: I0121 21:39:00.027546 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=3.043740624 podStartE2EDuration="4.027516079s" podCreationTimestamp="2026-01-21 21:38:56 +0000 UTC" firstStartedPulling="2026-01-21 21:38:57.525792272 +0000 UTC m=+1829.747970742" lastFinishedPulling="2026-01-21 21:38:58.509567727 +0000 UTC m=+1830.731746197" observedRunningTime="2026-01-21 21:39:00.023548256 +0000 UTC m=+1832.245726716" watchObservedRunningTime="2026-01-21 21:39:00.027516079 +0000 UTC m=+1832.249694549" Jan 21 21:39:01 crc kubenswrapper[4860]: I0121 21:39:01.000973 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"92419e32-d07a-4c59-8dd7-228521d212ef","Type":"ContainerStarted","Data":"adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb"} Jan 21 21:39:01 crc kubenswrapper[4860]: I0121 21:39:01.010506 4860 generic.go:334] "Generic (PLEG): container finished" podID="80181717-d115-418a-b9be-d17cc852e9ec" containerID="4e7e11e153ee33a9dcb2bb16b32fd498cd31ca954d27ec122049110e182bc57d" exitCode=143 Jan 21 21:39:01 crc kubenswrapper[4860]: I0121 21:39:01.011102 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"80181717-d115-418a-b9be-d17cc852e9ec","Type":"ContainerDied","Data":"4e7e11e153ee33a9dcb2bb16b32fd498cd31ca954d27ec122049110e182bc57d"} Jan 21 21:39:01 crc kubenswrapper[4860]: I0121 21:39:01.117245 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:01 crc kubenswrapper[4860]: I0121 21:39:01.884249 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:02 crc kubenswrapper[4860]: I0121 21:39:02.025612 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" 
event={"ID":"92419e32-d07a-4c59-8dd7-228521d212ef","Type":"ContainerStarted","Data":"9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978"} Jan 21 21:39:02 crc kubenswrapper[4860]: I0121 21:39:02.050469 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=4.981029487 podStartE2EDuration="6.050446623s" podCreationTimestamp="2026-01-21 21:38:56 +0000 UTC" firstStartedPulling="2026-01-21 21:38:57.847764629 +0000 UTC m=+1830.069943099" lastFinishedPulling="2026-01-21 21:38:58.917181765 +0000 UTC m=+1831.139360235" observedRunningTime="2026-01-21 21:39:02.046248842 +0000 UTC m=+1834.268427312" watchObservedRunningTime="2026-01-21 21:39:02.050446623 +0000 UTC m=+1834.272625093" Jan 21 21:39:02 crc kubenswrapper[4860]: I0121 21:39:02.318343 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:03 crc kubenswrapper[4860]: I0121 21:39:03.527417 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:04 crc kubenswrapper[4860]: I0121 21:39:04.836237 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:06 crc kubenswrapper[4860]: I0121 21:39:06.110903 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:06 crc kubenswrapper[4860]: I0121 21:39:06.908023 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:07 crc 
kubenswrapper[4860]: I0121 21:39:07.136352 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:07 crc kubenswrapper[4860]: I0121 21:39:07.155313 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:07 crc kubenswrapper[4860]: I0121 21:39:07.194854 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:39:07 crc kubenswrapper[4860]: I0121 21:39:07.224216 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:39:07 crc kubenswrapper[4860]: I0121 21:39:07.374509 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:08 crc kubenswrapper[4860]: I0121 21:39:08.117560 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="92419e32-d07a-4c59-8dd7-228521d212ef" containerName="cinder-scheduler" containerID="cri-o://adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb" gracePeriod=30 Jan 21 21:39:08 crc kubenswrapper[4860]: I0121 21:39:08.117637 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="92419e32-d07a-4c59-8dd7-228521d212ef" containerName="probe" containerID="cri-o://9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978" gracePeriod=30 Jan 21 21:39:08 crc kubenswrapper[4860]: I0121 21:39:08.117871 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" containerName="probe" containerID="cri-o://0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4" gracePeriod=30 Jan 
21 21:39:08 crc kubenswrapper[4860]: I0121 21:39:08.117854 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" containerName="cinder-backup" containerID="cri-o://7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808" gracePeriod=30 Jan 21 21:39:08 crc kubenswrapper[4860]: I0121 21:39:08.351598 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:08 crc kubenswrapper[4860]: I0121 21:39:08.351995 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" containerName="watcher-decision-engine" containerID="cri-o://07efbfc894fb85132cbd1b08de9be0ff3681facacf231d8f3ac8c3b20673d43e" gracePeriod=30 Jan 21 21:39:08 crc kubenswrapper[4860]: I0121 21:39:08.596869 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:08 crc kubenswrapper[4860]: I0121 21:39:08.598422 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:39:08 crc kubenswrapper[4860]: E0121 21:39:08.598812 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:39:09 crc kubenswrapper[4860]: I0121 21:39:09.146295 4860 generic.go:334] "Generic (PLEG): container finished" 
podID="92419e32-d07a-4c59-8dd7-228521d212ef" containerID="9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978" exitCode=0 Jan 21 21:39:09 crc kubenswrapper[4860]: I0121 21:39:09.146796 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"92419e32-d07a-4c59-8dd7-228521d212ef","Type":"ContainerDied","Data":"9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978"} Jan 21 21:39:09 crc kubenswrapper[4860]: I0121 21:39:09.435535 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:39:09 crc kubenswrapper[4860]: I0121 21:39:09.949234 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:09 crc kubenswrapper[4860]: I0121 21:39:09.987835 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.038420 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-combined-ca-bundle\") pod \"92419e32-d07a-4c59-8dd7-228521d212ef\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.038481 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-cert-memcached-mtls\") pod \"92419e32-d07a-4c59-8dd7-228521d212ef\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.038516 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/92419e32-d07a-4c59-8dd7-228521d212ef-etc-machine-id\") pod \"92419e32-d07a-4c59-8dd7-228521d212ef\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.038650 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-522xg\" (UniqueName: \"kubernetes.io/projected/92419e32-d07a-4c59-8dd7-228521d212ef-kube-api-access-522xg\") pod \"92419e32-d07a-4c59-8dd7-228521d212ef\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.038759 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-scripts\") pod \"92419e32-d07a-4c59-8dd7-228521d212ef\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.038815 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data\") pod \"92419e32-d07a-4c59-8dd7-228521d212ef\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.038888 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data-custom\") pod \"92419e32-d07a-4c59-8dd7-228521d212ef\" (UID: \"92419e32-d07a-4c59-8dd7-228521d212ef\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.045363 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92419e32-d07a-4c59-8dd7-228521d212ef-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "92419e32-d07a-4c59-8dd7-228521d212ef" (UID: "92419e32-d07a-4c59-8dd7-228521d212ef"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.055248 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-scripts" (OuterVolumeSpecName: "scripts") pod "92419e32-d07a-4c59-8dd7-228521d212ef" (UID: "92419e32-d07a-4c59-8dd7-228521d212ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.056815 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92419e32-d07a-4c59-8dd7-228521d212ef-kube-api-access-522xg" (OuterVolumeSpecName: "kube-api-access-522xg") pod "92419e32-d07a-4c59-8dd7-228521d212ef" (UID: "92419e32-d07a-4c59-8dd7-228521d212ef"). InnerVolumeSpecName "kube-api-access-522xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.063992 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "92419e32-d07a-4c59-8dd7-228521d212ef" (UID: "92419e32-d07a-4c59-8dd7-228521d212ef"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.095687 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.143913 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-machine-id\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.144010 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-brick\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.144095 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-lib-cinder\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.144138 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.144216 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.144269 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-dev\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.144306 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data-custom\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.144316 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.144351 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-dev" (OuterVolumeSpecName: "dev") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.144361 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.144469 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-iscsi\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.145692 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-nvme\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.145772 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-scripts\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.145867 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-combined-ca-bundle\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.145917 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg6qp\" (UniqueName: 
\"kubernetes.io/projected/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-kube-api-access-hg6qp\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.145962 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-sys\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.145999 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-cinder\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.146040 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-run\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.146079 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-lib-modules\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.146164 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-cert-memcached-mtls\") pod \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\" (UID: \"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b\") " Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.147796 4860 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-522xg\" (UniqueName: \"kubernetes.io/projected/92419e32-d07a-4c59-8dd7-228521d212ef-kube-api-access-522xg\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.148400 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.148421 4860 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.148431 4860 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.148442 4860 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.148466 4860 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.148483 4860 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-dev\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.148492 4860 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/92419e32-d07a-4c59-8dd7-228521d212ef-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.149289 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.149407 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-sys" (OuterVolumeSpecName: "sys") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.149542 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.149691 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.149777 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.149943 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-run" (OuterVolumeSpecName: "run") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.153380 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.157059 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-kube-api-access-hg6qp" (OuterVolumeSpecName: "kube-api-access-hg6qp") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "kube-api-access-hg6qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.167410 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-scripts" (OuterVolumeSpecName: "scripts") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.205135 4860 generic.go:334] "Generic (PLEG): container finished" podID="92419e32-d07a-4c59-8dd7-228521d212ef" containerID="adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb" exitCode=0 Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.205306 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"92419e32-d07a-4c59-8dd7-228521d212ef","Type":"ContainerDied","Data":"adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb"} Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.205347 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"92419e32-d07a-4c59-8dd7-228521d212ef","Type":"ContainerDied","Data":"1647ada149047c5cbd10e50fe7d56686ee25a35d2a3b76358728233690fd28ce"} Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.205371 4860 scope.go:117] "RemoveContainer" containerID="9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.205733 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.207404 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92419e32-d07a-4c59-8dd7-228521d212ef" (UID: "92419e32-d07a-4c59-8dd7-228521d212ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.211909 4860 generic.go:334] "Generic (PLEG): container finished" podID="fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" containerID="0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4" exitCode=0 Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.211955 4860 generic.go:334] "Generic (PLEG): container finished" podID="fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" containerID="7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808" exitCode=0 Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.211981 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b","Type":"ContainerDied","Data":"0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4"} Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.212015 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b","Type":"ContainerDied","Data":"7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808"} Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.212031 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"fbc841c8-8aec-4d00-bcf9-3de80ac2d67b","Type":"ContainerDied","Data":"e52eaf32f245130963a139e3e5d23e28df30f0fd36273bcca29be14a1141f852"} Jan 21 21:39:10 crc 
kubenswrapper[4860]: I0121 21:39:10.212111 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.232859 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data" (OuterVolumeSpecName: "config-data") pod "92419e32-d07a-4c59-8dd7-228521d212ef" (UID: "92419e32-d07a-4c59-8dd7-228521d212ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.238273 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.239426 4860 scope.go:117] "RemoveContainer" containerID="adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250403 4860 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250452 4860 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250464 4860 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-etc-nvme\") on node \"crc\" 
DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250478 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250492 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250506 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250520 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg6qp\" (UniqueName: \"kubernetes.io/projected/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-kube-api-access-hg6qp\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250531 4860 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-sys\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250544 4860 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250560 4860 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-run\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250573 4860 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.250583 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.253970 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "92419e32-d07a-4c59-8dd7-228521d212ef" (UID: "92419e32-d07a-4c59-8dd7-228521d212ef"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.260021 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data" (OuterVolumeSpecName: "config-data") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.280966 4860 scope.go:117] "RemoveContainer" containerID="9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978" Jan 21 21:39:10 crc kubenswrapper[4860]: E0121 21:39:10.281880 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978\": container with ID starting with 9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978 not found: ID does not exist" containerID="9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.281984 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978"} err="failed to get container status \"9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978\": rpc error: code = NotFound desc = could not find container \"9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978\": container with ID starting with 9b0c4169b70a09c839701117e23313dc098a5c484bdbd832c34a7f759edf4978 not found: ID does not exist" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.282030 4860 scope.go:117] "RemoveContainer" containerID="adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb" Jan 21 21:39:10 crc kubenswrapper[4860]: E0121 21:39:10.282757 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb\": container with ID starting with adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb not found: ID does not exist" containerID="adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.282783 
4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb"} err="failed to get container status \"adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb\": rpc error: code = NotFound desc = could not find container \"adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb\": container with ID starting with adc377b91b5ecf8fdbd94fbab04b193b87c54d964abad6c43630eb85939d7fcb not found: ID does not exist" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.282797 4860 scope.go:117] "RemoveContainer" containerID="0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.325257 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.325694 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="ceilometer-central-agent" containerID="cri-o://66580bd3983b89d20d2f295e0a7b3b77b9d83de645c0bccaaf1c0bcc8bf1b145" gracePeriod=30 Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.325911 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="proxy-httpd" containerID="cri-o://4fab2ecfebcf6ce1b7792b4677e12cb1d5b8e9e20dabbbe2f5eb8cc7ff7df311" gracePeriod=30 Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.325999 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="sg-core" containerID="cri-o://e20293595699bb46a4c4fb4d03ae645dc8aaf95cc1e594f048332bbc6818a197" gracePeriod=30 Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 
21:39:10.326099 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="ceilometer-notification-agent" containerID="cri-o://2c5500ecec7bfb6ab766d4cb3c9b089c01ca805ec2b0e7e3aa9b1ddf400f49e4" gracePeriod=30 Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.330540 4860 scope.go:117] "RemoveContainer" containerID="7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.353804 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.353840 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/92419e32-d07a-4c59-8dd7-228521d212ef-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.376978 4860 scope.go:117] "RemoveContainer" containerID="0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4" Jan 21 21:39:10 crc kubenswrapper[4860]: E0121 21:39:10.378160 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4\": container with ID starting with 0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4 not found: ID does not exist" containerID="0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.378222 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4"} err="failed to get container status 
\"0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4\": rpc error: code = NotFound desc = could not find container \"0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4\": container with ID starting with 0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4 not found: ID does not exist" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.378261 4860 scope.go:117] "RemoveContainer" containerID="7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808" Jan 21 21:39:10 crc kubenswrapper[4860]: E0121 21:39:10.378729 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808\": container with ID starting with 7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808 not found: ID does not exist" containerID="7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.378788 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808"} err="failed to get container status \"7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808\": rpc error: code = NotFound desc = could not find container \"7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808\": container with ID starting with 7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808 not found: ID does not exist" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.378832 4860 scope.go:117] "RemoveContainer" containerID="0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.379384 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4"} err="failed to get 
container status \"0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4\": rpc error: code = NotFound desc = could not find container \"0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4\": container with ID starting with 0cb84916709efed78ae633a22de9ab5d25dfdcad8fe1ad813c79f1d540935df4 not found: ID does not exist" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.379419 4860 scope.go:117] "RemoveContainer" containerID="7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.379546 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" (UID: "fbc841c8-8aec-4d00-bcf9-3de80ac2d67b"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.380022 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808"} err="failed to get container status \"7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808\": rpc error: code = NotFound desc = could not find container \"7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808\": container with ID starting with 7c30c1ca4c5fdcd9bf3ee58912e7d19e969400524c4937c719688bcee56fc808 not found: ID does not exist" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.456109 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.561220 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.575065 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.592277 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" path="/var/lib/kubelet/pods/fbc841c8-8aec-4d00-bcf9-3de80ac2d67b/volumes" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.594224 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.594269 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.604963 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:39:10 crc kubenswrapper[4860]: E0121 21:39:10.605595 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92419e32-d07a-4c59-8dd7-228521d212ef" containerName="probe" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.605617 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="92419e32-d07a-4c59-8dd7-228521d212ef" containerName="probe" Jan 21 21:39:10 crc kubenswrapper[4860]: E0121 21:39:10.605649 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" containerName="cinder-backup" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.605658 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" containerName="cinder-backup" Jan 21 21:39:10 crc kubenswrapper[4860]: E0121 21:39:10.605681 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" containerName="probe" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.605688 4860 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" containerName="probe" Jan 21 21:39:10 crc kubenswrapper[4860]: E0121 21:39:10.605705 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92419e32-d07a-4c59-8dd7-228521d212ef" containerName="cinder-scheduler" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.605712 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="92419e32-d07a-4c59-8dd7-228521d212ef" containerName="cinder-scheduler" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.605920 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="92419e32-d07a-4c59-8dd7-228521d212ef" containerName="cinder-scheduler" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.605964 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="92419e32-d07a-4c59-8dd7-228521d212ef" containerName="probe" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.605973 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" containerName="probe" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.605988 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc841c8-8aec-4d00-bcf9-3de80ac2d67b" containerName="cinder-backup" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.607554 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.613638 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.621640 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.623698 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.632406 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.635695 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.649391 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.664572 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.664666 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnr7b\" (UniqueName: \"kubernetes.io/projected/d9c8e109-4a77-4ee3-bc53-130f69698d16-kube-api-access-bnr7b\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.664729 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.664761 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.664790 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-dev\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.664819 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-sys\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.664848 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-scripts\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.664871 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-run\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.664901 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-combined-ca-bundle\") pod 
\"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.664947 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.664975 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kflfn\" (UniqueName: \"kubernetes.io/projected/501f7779-9761-4888-bcec-b19b7cede5ca-kube-api-access-kflfn\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665004 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665039 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665105 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-nvme\") 
pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665129 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-lib-modules\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665164 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665205 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-scripts\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665239 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665266 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-iscsi\") pod \"cinder-backup-0\" (UID: 
\"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665293 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/501f7779-9761-4888-bcec-b19b7cede5ca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665326 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665349 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.665406 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.766815 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-scripts\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " 
pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.766875 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.766909 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.766957 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/501f7779-9761-4888-bcec-b19b7cede5ca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767003 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767022 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 
21:39:10.767046 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767076 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767095 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnr7b\" (UniqueName: \"kubernetes.io/projected/d9c8e109-4a77-4ee3-bc53-130f69698d16-kube-api-access-bnr7b\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767120 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767142 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767163 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" 
(UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-dev\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767187 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-sys\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767209 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-scripts\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767230 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-run\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767260 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767280 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " 
pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767308 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kflfn\" (UniqueName: \"kubernetes.io/projected/501f7779-9761-4888-bcec-b19b7cede5ca-kube-api-access-kflfn\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767327 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767357 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767392 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767415 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-lib-modules\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767443 
4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.767543 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.768180 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-dev\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.768445 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.768484 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/501f7779-9761-4888-bcec-b19b7cede5ca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.768591 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-sys\") pod 
\"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.768648 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.768737 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.768779 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-lib-modules\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.768805 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-run\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.769135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.769893 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.774275 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.778058 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-scripts\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.778344 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.778561 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.778560 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-scripts\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.778637 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.778826 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.788416 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.790690 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.792192 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " 
pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.792200 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnr7b\" (UniqueName: \"kubernetes.io/projected/d9c8e109-4a77-4ee3-bc53-130f69698d16-kube-api-access-bnr7b\") pod \"cinder-backup-0\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.792270 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kflfn\" (UniqueName: \"kubernetes.io/projected/501f7779-9761-4888-bcec-b19b7cede5ca-kube-api-access-kflfn\") pod \"cinder-scheduler-0\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.942533 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:10 crc kubenswrapper[4860]: I0121 21:39:10.964678 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:11 crc kubenswrapper[4860]: I0121 21:39:11.218384 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:11 crc kubenswrapper[4860]: I0121 21:39:11.230970 4860 generic.go:334] "Generic (PLEG): container finished" podID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerID="4fab2ecfebcf6ce1b7792b4677e12cb1d5b8e9e20dabbbe2f5eb8cc7ff7df311" exitCode=0 Jan 21 21:39:11 crc kubenswrapper[4860]: I0121 21:39:11.231023 4860 generic.go:334] "Generic (PLEG): container finished" podID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerID="e20293595699bb46a4c4fb4d03ae645dc8aaf95cc1e594f048332bbc6818a197" exitCode=2 Jan 21 21:39:11 crc kubenswrapper[4860]: I0121 21:39:11.231032 4860 generic.go:334] "Generic (PLEG): container finished" podID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerID="66580bd3983b89d20d2f295e0a7b3b77b9d83de645c0bccaaf1c0bcc8bf1b145" exitCode=0 Jan 21 21:39:11 crc kubenswrapper[4860]: I0121 21:39:11.231099 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2159c959-e321-407a-9b5e-9e7a7a137a16","Type":"ContainerDied","Data":"4fab2ecfebcf6ce1b7792b4677e12cb1d5b8e9e20dabbbe2f5eb8cc7ff7df311"} Jan 21 21:39:11 crc kubenswrapper[4860]: I0121 21:39:11.231140 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2159c959-e321-407a-9b5e-9e7a7a137a16","Type":"ContainerDied","Data":"e20293595699bb46a4c4fb4d03ae645dc8aaf95cc1e594f048332bbc6818a197"} Jan 21 21:39:11 crc kubenswrapper[4860]: I0121 21:39:11.231157 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"2159c959-e321-407a-9b5e-9e7a7a137a16","Type":"ContainerDied","Data":"66580bd3983b89d20d2f295e0a7b3b77b9d83de645c0bccaaf1c0bcc8bf1b145"} Jan 21 21:39:11 crc kubenswrapper[4860]: I0121 21:39:11.617886 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:39:11 crc kubenswrapper[4860]: I0121 21:39:11.729076 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:39:12 crc kubenswrapper[4860]: I0121 21:39:12.276421 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"d9c8e109-4a77-4ee3-bc53-130f69698d16","Type":"ContainerStarted","Data":"fae2d8fe59ebe4f61d7317868185c325a868f0f3981a870d55bd4f25b1b35519"} Jan 21 21:39:12 crc kubenswrapper[4860]: I0121 21:39:12.277002 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"d9c8e109-4a77-4ee3-bc53-130f69698d16","Type":"ContainerStarted","Data":"07ae7ceeda909c5127abdb8f6d33484fb13c99e6904bdcb255286bbb928af1d1"} Jan 21 21:39:12 crc kubenswrapper[4860]: I0121 21:39:12.277017 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"d9c8e109-4a77-4ee3-bc53-130f69698d16","Type":"ContainerStarted","Data":"4d0249a9c2b0b7612124e9d0282313f6d68775e73a646f4b86fa520057f5921c"} Jan 21 21:39:12 crc kubenswrapper[4860]: I0121 21:39:12.284987 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"501f7779-9761-4888-bcec-b19b7cede5ca","Type":"ContainerStarted","Data":"359ecdb8783fc18c38627ece7b1471a642737c314e1315a3dc0fdfa802e47259"} Jan 21 21:39:12 crc kubenswrapper[4860]: I0121 21:39:12.320362 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=2.32032679 podStartE2EDuration="2.32032679s" 
podCreationTimestamp="2026-01-21 21:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:39:12.310800134 +0000 UTC m=+1844.532978604" watchObservedRunningTime="2026-01-21 21:39:12.32032679 +0000 UTC m=+1844.542505260" Jan 21 21:39:12 crc kubenswrapper[4860]: I0121 21:39:12.522801 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:12 crc kubenswrapper[4860]: I0121 21:39:12.595576 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92419e32-d07a-4c59-8dd7-228521d212ef" path="/var/lib/kubelet/pods/92419e32-d07a-4c59-8dd7-228521d212ef/volumes" Jan 21 21:39:13 crc kubenswrapper[4860]: I0121 21:39:13.304537 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"501f7779-9761-4888-bcec-b19b7cede5ca","Type":"ContainerStarted","Data":"1ad27740ed618831be7a49f0315efe721a1ca108458b6ad711631f3c16c448d4"} Jan 21 21:39:13 crc kubenswrapper[4860]: I0121 21:39:13.762666 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.318468 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"501f7779-9761-4888-bcec-b19b7cede5ca","Type":"ContainerStarted","Data":"df3df166931722411e4178d2d62ddca1b5a9e8f959c20f37923a984885d1918b"} Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.322719 4860 generic.go:334] "Generic (PLEG): container finished" podID="5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" containerID="07efbfc894fb85132cbd1b08de9be0ff3681facacf231d8f3ac8c3b20673d43e" exitCode=0 Jan 21 21:39:14 crc kubenswrapper[4860]: 
I0121 21:39:14.322770 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6","Type":"ContainerDied","Data":"07efbfc894fb85132cbd1b08de9be0ff3681facacf231d8f3ac8c3b20673d43e"} Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.322817 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6","Type":"ContainerDied","Data":"38e5cd6caa2866671fe7e19e8544787a7fdf8d9a7b530582668a221b6b22d5ab"} Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.322836 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38e5cd6caa2866671fe7e19e8544787a7fdf8d9a7b530582668a221b6b22d5ab" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.359105 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=4.359075733 podStartE2EDuration="4.359075733s" podCreationTimestamp="2026-01-21 21:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:39:14.355473592 +0000 UTC m=+1846.577652092" watchObservedRunningTime="2026-01-21 21:39:14.359075733 +0000 UTC m=+1846.581254203" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.382718 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.462877 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-custom-prometheus-ca\") pod \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.462980 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-config-data\") pod \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.463030 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-combined-ca-bundle\") pod \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.463062 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-cert-memcached-mtls\") pod \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.463145 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-logs\") pod \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.463206 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zgm6g\" (UniqueName: \"kubernetes.io/projected/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-kube-api-access-zgm6g\") pod \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\" (UID: \"5d7e6ba8-85c9-44f6-8cd8-fff802df95f6\") " Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.465337 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-logs" (OuterVolumeSpecName: "logs") pod "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" (UID: "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.479653 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-kube-api-access-zgm6g" (OuterVolumeSpecName: "kube-api-access-zgm6g") pod "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" (UID: "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6"). InnerVolumeSpecName "kube-api-access-zgm6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.505406 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" (UID: "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.518206 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" (UID: "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.557143 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-config-data" (OuterVolumeSpecName: "config-data") pod "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" (UID: "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.565432 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" (UID: "5d7e6ba8-85c9-44f6-8cd8-fff802df95f6"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.566520 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.566568 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.566577 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.566588 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-cert-memcached-mtls\") on node \"crc\" 
DevicePath \"\"" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.566598 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:14 crc kubenswrapper[4860]: I0121 21:39:14.566609 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgm6g\" (UniqueName: \"kubernetes.io/projected/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6-kube-api-access-zgm6g\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.000584 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/watcher-decision-engine/0.log" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.343017 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.370589 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.384673 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.415627 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:15 crc kubenswrapper[4860]: E0121 21:39:15.416281 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" containerName="watcher-decision-engine" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.416311 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" containerName="watcher-decision-engine" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 
21:39:15.416617 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" containerName="watcher-decision-engine" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.417642 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.420393 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.426704 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.585521 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.585573 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.585610 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.585707 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.586043 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7pkp\" (UniqueName: \"kubernetes.io/projected/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-kube-api-access-g7pkp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.586146 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.688784 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7pkp\" (UniqueName: \"kubernetes.io/projected/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-kube-api-access-g7pkp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.688859 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.688986 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.689018 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.689054 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.689319 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.690111 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.698763 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.698804 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.698950 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.713306 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7pkp\" (UniqueName: \"kubernetes.io/projected/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-kube-api-access-g7pkp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.713348 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.735117 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.942742 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:15 crc kubenswrapper[4860]: I0121 21:39:15.965328 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:16 crc kubenswrapper[4860]: I0121 21:39:16.453973 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:16 crc kubenswrapper[4860]: I0121 21:39:16.595305 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d7e6ba8-85c9-44f6-8cd8-fff802df95f6" path="/var/lib/kubelet/pods/5d7e6ba8-85c9-44f6-8cd8-fff802df95f6/volumes" Jan 21 21:39:17 crc kubenswrapper[4860]: I0121 21:39:17.365637 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3d34481-e759-4c8e-a1a9-43b9ee574f6c","Type":"ContainerStarted","Data":"c6dfec913e0fd05d02c8944f36cf8269222ccd77ed0fb84830b2eb749600c35c"} Jan 21 21:39:17 crc kubenswrapper[4860]: I0121 21:39:17.366229 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3d34481-e759-4c8e-a1a9-43b9ee574f6c","Type":"ContainerStarted","Data":"2305756bd885de1f06b96687d12f462395d186a7b464fa1d11d2d8338ac3ae8d"} Jan 21 21:39:17 crc kubenswrapper[4860]: I0121 21:39:17.390878 4860 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.390853929 podStartE2EDuration="2.390853929s" podCreationTimestamp="2026-01-21 21:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:39:17.384520733 +0000 UTC m=+1849.606699213" watchObservedRunningTime="2026-01-21 21:39:17.390853929 +0000 UTC m=+1849.613032429" Jan 21 21:39:17 crc kubenswrapper[4860]: I0121 21:39:17.405819 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:18 crc kubenswrapper[4860]: I0121 21:39:18.632482 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.394706 4860 generic.go:334] "Generic (PLEG): container finished" podID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerID="2c5500ecec7bfb6ab766d4cb3c9b089c01ca805ec2b0e7e3aa9b1ddf400f49e4" exitCode=0 Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.394773 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2159c959-e321-407a-9b5e-9e7a7a137a16","Type":"ContainerDied","Data":"2c5500ecec7bfb6ab766d4cb3c9b089c01ca805ec2b0e7e3aa9b1ddf400f49e4"} Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.676850 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.795793 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-ceilometer-tls-certs\") pod \"2159c959-e321-407a-9b5e-9e7a7a137a16\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.796059 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-config-data\") pod \"2159c959-e321-407a-9b5e-9e7a7a137a16\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.796198 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-run-httpd\") pod \"2159c959-e321-407a-9b5e-9e7a7a137a16\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.796244 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtskv\" (UniqueName: \"kubernetes.io/projected/2159c959-e321-407a-9b5e-9e7a7a137a16-kube-api-access-dtskv\") pod \"2159c959-e321-407a-9b5e-9e7a7a137a16\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.796268 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-log-httpd\") pod \"2159c959-e321-407a-9b5e-9e7a7a137a16\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.796306 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-sg-core-conf-yaml\") pod \"2159c959-e321-407a-9b5e-9e7a7a137a16\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.796336 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-scripts\") pod \"2159c959-e321-407a-9b5e-9e7a7a137a16\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.796363 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-combined-ca-bundle\") pod \"2159c959-e321-407a-9b5e-9e7a7a137a16\" (UID: \"2159c959-e321-407a-9b5e-9e7a7a137a16\") " Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.797344 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2159c959-e321-407a-9b5e-9e7a7a137a16" (UID: "2159c959-e321-407a-9b5e-9e7a7a137a16"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.797377 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2159c959-e321-407a-9b5e-9e7a7a137a16" (UID: "2159c959-e321-407a-9b5e-9e7a7a137a16"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.814885 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2159c959-e321-407a-9b5e-9e7a7a137a16-kube-api-access-dtskv" (OuterVolumeSpecName: "kube-api-access-dtskv") pod "2159c959-e321-407a-9b5e-9e7a7a137a16" (UID: "2159c959-e321-407a-9b5e-9e7a7a137a16"). InnerVolumeSpecName "kube-api-access-dtskv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.827163 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-scripts" (OuterVolumeSpecName: "scripts") pod "2159c959-e321-407a-9b5e-9e7a7a137a16" (UID: "2159c959-e321-407a-9b5e-9e7a7a137a16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.870558 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.892250 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2159c959-e321-407a-9b5e-9e7a7a137a16" (UID: "2159c959-e321-407a-9b5e-9e7a7a137a16"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.899612 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.899650 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtskv\" (UniqueName: \"kubernetes.io/projected/2159c959-e321-407a-9b5e-9e7a7a137a16-kube-api-access-dtskv\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.899660 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2159c959-e321-407a-9b5e-9e7a7a137a16-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.899670 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.899679 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.934804 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2159c959-e321-407a-9b5e-9e7a7a137a16" (UID: "2159c959-e321-407a-9b5e-9e7a7a137a16"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.954846 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-config-data" (OuterVolumeSpecName: "config-data") pod "2159c959-e321-407a-9b5e-9e7a7a137a16" (UID: "2159c959-e321-407a-9b5e-9e7a7a137a16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:19 crc kubenswrapper[4860]: I0121 21:39:19.970181 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2159c959-e321-407a-9b5e-9e7a7a137a16" (UID: "2159c959-e321-407a-9b5e-9e7a7a137a16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.001473 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.001888 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.001955 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2159c959-e321-407a-9b5e-9e7a7a137a16-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.410199 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"2159c959-e321-407a-9b5e-9e7a7a137a16","Type":"ContainerDied","Data":"d2bce750a7000a75ffe2e8767ee0a4f4db8ed36fe7f0932314204ea4815058cf"} Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.410289 4860 scope.go:117] "RemoveContainer" containerID="4fab2ecfebcf6ce1b7792b4677e12cb1d5b8e9e20dabbbe2f5eb8cc7ff7df311" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.410349 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.454166 4860 scope.go:117] "RemoveContainer" containerID="e20293595699bb46a4c4fb4d03ae645dc8aaf95cc1e594f048332bbc6818a197" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.467829 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.481838 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.487835 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:20 crc kubenswrapper[4860]: E0121 21:39:20.488339 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="sg-core" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.488358 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="sg-core" Jan 21 21:39:20 crc kubenswrapper[4860]: E0121 21:39:20.488384 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="ceilometer-notification-agent" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.488393 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="ceilometer-notification-agent" Jan 21 21:39:20 crc 
kubenswrapper[4860]: E0121 21:39:20.488414 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="ceilometer-central-agent" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.488421 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="ceilometer-central-agent" Jan 21 21:39:20 crc kubenswrapper[4860]: E0121 21:39:20.488434 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="proxy-httpd" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.488442 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="proxy-httpd" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.488619 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="sg-core" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.488632 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="ceilometer-central-agent" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.488644 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="ceilometer-notification-agent" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.488659 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" containerName="proxy-httpd" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.493018 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.498788 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.499160 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.499345 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.523787 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.543125 4860 scope.go:117] "RemoveContainer" containerID="2c5500ecec7bfb6ab766d4cb3c9b089c01ca805ec2b0e7e3aa9b1ddf400f49e4" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.568267 4860 scope.go:117] "RemoveContainer" containerID="66580bd3983b89d20d2f295e0a7b3b77b9d83de645c0bccaaf1c0bcc8bf1b145" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.591281 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2159c959-e321-407a-9b5e-9e7a7a137a16" path="/var/lib/kubelet/pods/2159c959-e321-407a-9b5e-9e7a7a137a16/volumes" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.616138 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.616215 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-scripts\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.616385 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-config-data\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.616408 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.616438 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-log-httpd\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.616462 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-run-httpd\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.616486 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqcwn\" (UniqueName: 
\"kubernetes.io/projected/cd8f1ed8-22ff-4839-b3da-6556980904b8-kube-api-access-hqcwn\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.616508 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.718443 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-config-data\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.718505 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.718536 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-log-httpd\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.718555 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-run-httpd\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") 
" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.718572 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqcwn\" (UniqueName: \"kubernetes.io/projected/cd8f1ed8-22ff-4839-b3da-6556980904b8-kube-api-access-hqcwn\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.718598 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.718632 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.718661 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-scripts\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.719893 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-log-httpd\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.720221 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-run-httpd\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.729415 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-config-data\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.729595 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-scripts\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.730178 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.731047 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.743765 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.753968 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqcwn\" (UniqueName: \"kubernetes.io/projected/cd8f1ed8-22ff-4839-b3da-6556980904b8-kube-api-access-hqcwn\") pod \"ceilometer-0\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:20 crc kubenswrapper[4860]: I0121 21:39:20.812458 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:21 crc kubenswrapper[4860]: I0121 21:39:21.128540 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:21 crc kubenswrapper[4860]: I0121 21:39:21.350522 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:21 crc kubenswrapper[4860]: I0121 21:39:21.389811 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:21 crc kubenswrapper[4860]: I0121 21:39:21.423047 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd8f1ed8-22ff-4839-b3da-6556980904b8","Type":"ContainerStarted","Data":"22beb3edd53371b522aca262c1501a27986c289b0657142c63869f4a33e4ef69"} Jan 21 21:39:21 crc kubenswrapper[4860]: I0121 21:39:21.452574 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:22 crc kubenswrapper[4860]: I0121 21:39:22.380779 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:22 crc kubenswrapper[4860]: I0121 21:39:22.436678 4860 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd8f1ed8-22ff-4839-b3da-6556980904b8","Type":"ContainerStarted","Data":"dc9539f52b73332e52647133d0fd5a80085a1167efe9b0a9d42e230d9d793ea5"} Jan 21 21:39:23 crc kubenswrapper[4860]: I0121 21:39:23.467451 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd8f1ed8-22ff-4839-b3da-6556980904b8","Type":"ContainerStarted","Data":"e810c364dcb91c62c18804349a49688101173730d5c2948fe6927f837d12b2f4"} Jan 21 21:39:23 crc kubenswrapper[4860]: I0121 21:39:23.579260 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:39:23 crc kubenswrapper[4860]: E0121 21:39:23.580266 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:39:23 crc kubenswrapper[4860]: I0121 21:39:23.605798 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:24 crc kubenswrapper[4860]: I0121 21:39:24.478907 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd8f1ed8-22ff-4839-b3da-6556980904b8","Type":"ContainerStarted","Data":"b21583037c173ca7261948ee8ebb57f622489f033aeff6d95a6154be4a01078d"} Jan 21 21:39:24 crc kubenswrapper[4860]: I0121 21:39:24.883559 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:25 crc kubenswrapper[4860]: I0121 21:39:25.491596 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd8f1ed8-22ff-4839-b3da-6556980904b8","Type":"ContainerStarted","Data":"ecb9cd38ebe824a0013e1399f69a12f16e39016c459465729a38e0a9eff7b215"} Jan 21 21:39:25 crc kubenswrapper[4860]: I0121 21:39:25.492269 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:25 crc kubenswrapper[4860]: I0121 21:39:25.520928 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.792692528 podStartE2EDuration="5.520904133s" podCreationTimestamp="2026-01-21 21:39:20 +0000 UTC" firstStartedPulling="2026-01-21 21:39:21.360176379 +0000 UTC m=+1853.582354849" lastFinishedPulling="2026-01-21 21:39:25.088387984 +0000 UTC m=+1857.310566454" observedRunningTime="2026-01-21 21:39:25.515354101 +0000 UTC m=+1857.737532581" watchObservedRunningTime="2026-01-21 21:39:25.520904133 +0000 UTC m=+1857.743082603" Jan 21 21:39:25 crc kubenswrapper[4860]: I0121 21:39:25.736381 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:25 crc kubenswrapper[4860]: I0121 21:39:25.768446 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:26 crc kubenswrapper[4860]: I0121 21:39:26.090668 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:26 crc kubenswrapper[4860]: I0121 21:39:26.509894 4860 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:26 crc kubenswrapper[4860]: I0121 21:39:26.540837 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.353469 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.635445 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.745554 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-nks5c"] Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.755423 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-nks5c"] Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.779937 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.780323 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="d9c8e109-4a77-4ee3-bc53-130f69698d16" containerName="cinder-backup" containerID="cri-o://07ae7ceeda909c5127abdb8f6d33484fb13c99e6904bdcb255286bbb928af1d1" gracePeriod=30 Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.780900 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="d9c8e109-4a77-4ee3-bc53-130f69698d16" containerName="probe" containerID="cri-o://fae2d8fe59ebe4f61d7317868185c325a868f0f3981a870d55bd4f25b1b35519" gracePeriod=30 Jan 
21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.806655 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.807104 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="501f7779-9761-4888-bcec-b19b7cede5ca" containerName="cinder-scheduler" containerID="cri-o://1ad27740ed618831be7a49f0315efe721a1ca108458b6ad711631f3c16c448d4" gracePeriod=30 Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.807786 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="501f7779-9761-4888-bcec-b19b7cede5ca" containerName="probe" containerID="cri-o://df3df166931722411e4178d2d62ddca1b5a9e8f959c20f37923a984885d1918b" gracePeriod=30 Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.857772 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder19d1-account-delete-wmtvn"] Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.859573 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" Jan 21 21:39:27 crc kubenswrapper[4860]: I0121 21:39:27.896031 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder19d1-account-delete-wmtvn"] Jan 21 21:39:28 crc kubenswrapper[4860]: I0121 21:39:28.025458 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-operator-scripts\") pod \"cinder19d1-account-delete-wmtvn\" (UID: \"28a50d91-ca3e-487f-9ae7-fbde57adf0ca\") " pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" Jan 21 21:39:28 crc kubenswrapper[4860]: I0121 21:39:28.025630 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d4q7\" (UniqueName: \"kubernetes.io/projected/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-kube-api-access-5d4q7\") pod \"cinder19d1-account-delete-wmtvn\" (UID: \"28a50d91-ca3e-487f-9ae7-fbde57adf0ca\") " pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" Jan 21 21:39:28 crc kubenswrapper[4860]: I0121 21:39:28.127443 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-operator-scripts\") pod \"cinder19d1-account-delete-wmtvn\" (UID: \"28a50d91-ca3e-487f-9ae7-fbde57adf0ca\") " pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" Jan 21 21:39:28 crc kubenswrapper[4860]: I0121 21:39:28.128055 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d4q7\" (UniqueName: \"kubernetes.io/projected/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-kube-api-access-5d4q7\") pod \"cinder19d1-account-delete-wmtvn\" (UID: \"28a50d91-ca3e-487f-9ae7-fbde57adf0ca\") " pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" Jan 21 21:39:28 crc 
kubenswrapper[4860]: I0121 21:39:28.128387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-operator-scripts\") pod \"cinder19d1-account-delete-wmtvn\" (UID: \"28a50d91-ca3e-487f-9ae7-fbde57adf0ca\") " pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" Jan 21 21:39:28 crc kubenswrapper[4860]: I0121 21:39:28.168982 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d4q7\" (UniqueName: \"kubernetes.io/projected/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-kube-api-access-5d4q7\") pod \"cinder19d1-account-delete-wmtvn\" (UID: \"28a50d91-ca3e-487f-9ae7-fbde57adf0ca\") " pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" Jan 21 21:39:28 crc kubenswrapper[4860]: I0121 21:39:28.196738 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" Jan 21 21:39:28 crc kubenswrapper[4860]: I0121 21:39:28.605704 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba9197a5-7a88-494b-927d-5e3fc723d5e0" path="/var/lib/kubelet/pods/ba9197a5-7a88-494b-927d-5e3fc723d5e0/volumes" Jan 21 21:39:28 crc kubenswrapper[4860]: I0121 21:39:28.772477 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder19d1-account-delete-wmtvn"] Jan 21 21:39:28 crc kubenswrapper[4860]: W0121 21:39:28.815561 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28a50d91_ca3e_487f_9ae7_fbde57adf0ca.slice/crio-b167888bce357bc886cfb02397adf38e22ef449ed23ae2c7da59e18d661c06ea WatchSource:0}: Error finding container b167888bce357bc886cfb02397adf38e22ef449ed23ae2c7da59e18d661c06ea: Status 404 returned error can't find the container with id b167888bce357bc886cfb02397adf38e22ef449ed23ae2c7da59e18d661c06ea Jan 21 21:39:28 crc 
kubenswrapper[4860]: I0121 21:39:28.901293 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:29 crc kubenswrapper[4860]: I0121 21:39:29.546973 4860 generic.go:334] "Generic (PLEG): container finished" podID="d9c8e109-4a77-4ee3-bc53-130f69698d16" containerID="fae2d8fe59ebe4f61d7317868185c325a868f0f3981a870d55bd4f25b1b35519" exitCode=0 Jan 21 21:39:29 crc kubenswrapper[4860]: I0121 21:39:29.547146 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"d9c8e109-4a77-4ee3-bc53-130f69698d16","Type":"ContainerDied","Data":"fae2d8fe59ebe4f61d7317868185c325a868f0f3981a870d55bd4f25b1b35519"} Jan 21 21:39:29 crc kubenswrapper[4860]: I0121 21:39:29.558152 4860 generic.go:334] "Generic (PLEG): container finished" podID="501f7779-9761-4888-bcec-b19b7cede5ca" containerID="df3df166931722411e4178d2d62ddca1b5a9e8f959c20f37923a984885d1918b" exitCode=0 Jan 21 21:39:29 crc kubenswrapper[4860]: I0121 21:39:29.558294 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"501f7779-9761-4888-bcec-b19b7cede5ca","Type":"ContainerDied","Data":"df3df166931722411e4178d2d62ddca1b5a9e8f959c20f37923a984885d1918b"} Jan 21 21:39:29 crc kubenswrapper[4860]: I0121 21:39:29.571882 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" event={"ID":"28a50d91-ca3e-487f-9ae7-fbde57adf0ca","Type":"ContainerDied","Data":"542f04e6c1e45c105233c40a3161f538aaaf447a3c4f3334557393fa81a669d3"} Jan 21 21:39:29 crc kubenswrapper[4860]: I0121 21:39:29.571824 4860 generic.go:334] "Generic (PLEG): container finished" podID="28a50d91-ca3e-487f-9ae7-fbde57adf0ca" containerID="542f04e6c1e45c105233c40a3161f538aaaf447a3c4f3334557393fa81a669d3" exitCode=0 Jan 21 21:39:29 crc kubenswrapper[4860]: I0121 
21:39:29.572175 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" event={"ID":"28a50d91-ca3e-487f-9ae7-fbde57adf0ca","Type":"ContainerStarted","Data":"b167888bce357bc886cfb02397adf38e22ef449ed23ae2c7da59e18d661c06ea"} Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.001796 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.002458 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e3d34481-e759-4c8e-a1a9-43b9ee574f6c" containerName="watcher-decision-engine" containerID="cri-o://c6dfec913e0fd05d02c8944f36cf8269222ccd77ed0fb84830b2eb749600c35c" gracePeriod=30 Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.138164 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.419377 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.419737 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="ceilometer-central-agent" containerID="cri-o://dc9539f52b73332e52647133d0fd5a80085a1167efe9b0a9d42e230d9d793ea5" gracePeriod=30 Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.420311 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="proxy-httpd" containerID="cri-o://ecb9cd38ebe824a0013e1399f69a12f16e39016c459465729a38e0a9eff7b215" gracePeriod=30 Jan 21 21:39:30 crc 
kubenswrapper[4860]: I0121 21:39:30.420508 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="ceilometer-notification-agent" containerID="cri-o://e810c364dcb91c62c18804349a49688101173730d5c2948fe6927f837d12b2f4" gracePeriod=30 Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.420770 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="sg-core" containerID="cri-o://b21583037c173ca7261948ee8ebb57f622489f033aeff6d95a6154be4a01078d" gracePeriod=30 Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.592242 4860 generic.go:334] "Generic (PLEG): container finished" podID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerID="b21583037c173ca7261948ee8ebb57f622489f033aeff6d95a6154be4a01078d" exitCode=2 Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.593631 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd8f1ed8-22ff-4839-b3da-6556980904b8","Type":"ContainerDied","Data":"b21583037c173ca7261948ee8ebb57f622489f033aeff6d95a6154be4a01078d"} Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.595761 4860 generic.go:334] "Generic (PLEG): container finished" podID="80181717-d115-418a-b9be-d17cc852e9ec" containerID="d8b4aef7e44b61bd6f15df66726f9bdaf3e361e9a43c1a27ef3598c38ac6eae5" exitCode=137 Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.595879 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"80181717-d115-418a-b9be-d17cc852e9ec","Type":"ContainerDied","Data":"d8b4aef7e44b61bd6f15df66726f9bdaf3e361e9a43c1a27ef3598c38ac6eae5"} Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.596025 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" 
event={"ID":"80181717-d115-418a-b9be-d17cc852e9ec","Type":"ContainerDied","Data":"20ce40073327da9d82f148478972385365d52340bbe1a16e429073dd27add7de"} Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.596110 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20ce40073327da9d82f148478972385365d52340bbe1a16e429073dd27add7de" Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.854833 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.898712 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data-custom\") pod \"80181717-d115-418a-b9be-d17cc852e9ec\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.898785 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-cert-memcached-mtls\") pod \"80181717-d115-418a-b9be-d17cc852e9ec\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.898887 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data\") pod \"80181717-d115-418a-b9be-d17cc852e9ec\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.898909 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80181717-d115-418a-b9be-d17cc852e9ec-etc-machine-id\") pod \"80181717-d115-418a-b9be-d17cc852e9ec\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " Jan 21 21:39:30 crc 
kubenswrapper[4860]: I0121 21:39:30.899081 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn7zx\" (UniqueName: \"kubernetes.io/projected/80181717-d115-418a-b9be-d17cc852e9ec-kube-api-access-qn7zx\") pod \"80181717-d115-418a-b9be-d17cc852e9ec\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.899154 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-scripts\") pod \"80181717-d115-418a-b9be-d17cc852e9ec\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.899199 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80181717-d115-418a-b9be-d17cc852e9ec-logs\") pod \"80181717-d115-418a-b9be-d17cc852e9ec\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.899231 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-combined-ca-bundle\") pod \"80181717-d115-418a-b9be-d17cc852e9ec\" (UID: \"80181717-d115-418a-b9be-d17cc852e9ec\") " Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.904605 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80181717-d115-418a-b9be-d17cc852e9ec-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "80181717-d115-418a-b9be-d17cc852e9ec" (UID: "80181717-d115-418a-b9be-d17cc852e9ec"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.906747 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80181717-d115-418a-b9be-d17cc852e9ec-logs" (OuterVolumeSpecName: "logs") pod "80181717-d115-418a-b9be-d17cc852e9ec" (UID: "80181717-d115-418a-b9be-d17cc852e9ec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.944205 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "80181717-d115-418a-b9be-d17cc852e9ec" (UID: "80181717-d115-418a-b9be-d17cc852e9ec"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.945592 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-scripts" (OuterVolumeSpecName: "scripts") pod "80181717-d115-418a-b9be-d17cc852e9ec" (UID: "80181717-d115-418a-b9be-d17cc852e9ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:30 crc kubenswrapper[4860]: I0121 21:39:30.967074 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80181717-d115-418a-b9be-d17cc852e9ec-kube-api-access-qn7zx" (OuterVolumeSpecName: "kube-api-access-qn7zx") pod "80181717-d115-418a-b9be-d17cc852e9ec" (UID: "80181717-d115-418a-b9be-d17cc852e9ec"). InnerVolumeSpecName "kube-api-access-qn7zx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.000564 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data" (OuterVolumeSpecName: "config-data") pod "80181717-d115-418a-b9be-d17cc852e9ec" (UID: "80181717-d115-418a-b9be-d17cc852e9ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.005541 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.005593 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80181717-d115-418a-b9be-d17cc852e9ec-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.005604 4860 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.005616 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.005626 4860 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80181717-d115-418a-b9be-d17cc852e9ec-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.005635 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn7zx\" (UniqueName: 
\"kubernetes.io/projected/80181717-d115-418a-b9be-d17cc852e9ec-kube-api-access-qn7zx\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.022202 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80181717-d115-418a-b9be-d17cc852e9ec" (UID: "80181717-d115-418a-b9be-d17cc852e9ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.064393 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "80181717-d115-418a-b9be-d17cc852e9ec" (UID: "80181717-d115-418a-b9be-d17cc852e9ec"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.120361 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.120649 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/80181717-d115-418a-b9be-d17cc852e9ec-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.201482 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.324493 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-operator-scripts\") pod \"28a50d91-ca3e-487f-9ae7-fbde57adf0ca\" (UID: \"28a50d91-ca3e-487f-9ae7-fbde57adf0ca\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.324691 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d4q7\" (UniqueName: \"kubernetes.io/projected/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-kube-api-access-5d4q7\") pod \"28a50d91-ca3e-487f-9ae7-fbde57adf0ca\" (UID: \"28a50d91-ca3e-487f-9ae7-fbde57adf0ca\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.325741 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28a50d91-ca3e-487f-9ae7-fbde57adf0ca" (UID: "28a50d91-ca3e-487f-9ae7-fbde57adf0ca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.332790 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-kube-api-access-5d4q7" (OuterVolumeSpecName: "kube-api-access-5d4q7") pod "28a50d91-ca3e-487f-9ae7-fbde57adf0ca" (UID: "28a50d91-ca3e-487f-9ae7-fbde57adf0ca"). InnerVolumeSpecName "kube-api-access-5d4q7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.408984 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.427109 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.427163 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d4q7\" (UniqueName: \"kubernetes.io/projected/28a50d91-ca3e-487f-9ae7-fbde57adf0ca-kube-api-access-5d4q7\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.612857 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" event={"ID":"28a50d91-ca3e-487f-9ae7-fbde57adf0ca","Type":"ContainerDied","Data":"b167888bce357bc886cfb02397adf38e22ef449ed23ae2c7da59e18d661c06ea"} Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.612911 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b167888bce357bc886cfb02397adf38e22ef449ed23ae2c7da59e18d661c06ea" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.612999 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder19d1-account-delete-wmtvn" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.617878 4860 generic.go:334] "Generic (PLEG): container finished" podID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerID="ecb9cd38ebe824a0013e1399f69a12f16e39016c459465729a38e0a9eff7b215" exitCode=0 Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.617918 4860 generic.go:334] "Generic (PLEG): container finished" podID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerID="e810c364dcb91c62c18804349a49688101173730d5c2948fe6927f837d12b2f4" exitCode=0 Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.617927 4860 generic.go:334] "Generic (PLEG): container finished" podID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerID="dc9539f52b73332e52647133d0fd5a80085a1167efe9b0a9d42e230d9d793ea5" exitCode=0 Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.618079 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd8f1ed8-22ff-4839-b3da-6556980904b8","Type":"ContainerDied","Data":"ecb9cd38ebe824a0013e1399f69a12f16e39016c459465729a38e0a9eff7b215"} Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.618120 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd8f1ed8-22ff-4839-b3da-6556980904b8","Type":"ContainerDied","Data":"e810c364dcb91c62c18804349a49688101173730d5c2948fe6927f837d12b2f4"} Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.618133 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd8f1ed8-22ff-4839-b3da-6556980904b8","Type":"ContainerDied","Data":"dc9539f52b73332e52647133d0fd5a80085a1167efe9b0a9d42e230d9d793ea5"} Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.623011 4860 generic.go:334] "Generic (PLEG): container finished" podID="d9c8e109-4a77-4ee3-bc53-130f69698d16" 
containerID="07ae7ceeda909c5127abdb8f6d33484fb13c99e6904bdcb255286bbb928af1d1" exitCode=0 Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.623087 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"d9c8e109-4a77-4ee3-bc53-130f69698d16","Type":"ContainerDied","Data":"07ae7ceeda909c5127abdb8f6d33484fb13c99e6904bdcb255286bbb928af1d1"} Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.623125 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"d9c8e109-4a77-4ee3-bc53-130f69698d16","Type":"ContainerDied","Data":"4d0249a9c2b0b7612124e9d0282313f6d68775e73a646f4b86fa520057f5921c"} Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.623142 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d0249a9c2b0b7612124e9d0282313f6d68775e73a646f4b86fa520057f5921c" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.627032 4860 generic.go:334] "Generic (PLEG): container finished" podID="501f7779-9761-4888-bcec-b19b7cede5ca" containerID="1ad27740ed618831be7a49f0315efe721a1ca108458b6ad711631f3c16c448d4" exitCode=0 Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.627174 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.627249 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"501f7779-9761-4888-bcec-b19b7cede5ca","Type":"ContainerDied","Data":"1ad27740ed618831be7a49f0315efe721a1ca108458b6ad711631f3c16c448d4"} Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.627375 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"501f7779-9761-4888-bcec-b19b7cede5ca","Type":"ContainerDied","Data":"359ecdb8783fc18c38627ece7b1471a642737c314e1315a3dc0fdfa802e47259"} Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.627405 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="359ecdb8783fc18c38627ece7b1471a642737c314e1315a3dc0fdfa802e47259" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.667464 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.680419 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.689007 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.707549 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.734279 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/501f7779-9761-4888-bcec-b19b7cede5ca-etc-machine-id\") pod \"501f7779-9761-4888-bcec-b19b7cede5ca\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.734363 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data\") pod \"501f7779-9761-4888-bcec-b19b7cede5ca\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.734461 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-combined-ca-bundle\") pod \"501f7779-9761-4888-bcec-b19b7cede5ca\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.734511 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kflfn\" (UniqueName: \"kubernetes.io/projected/501f7779-9761-4888-bcec-b19b7cede5ca-kube-api-access-kflfn\") pod \"501f7779-9761-4888-bcec-b19b7cede5ca\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.734545 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-cert-memcached-mtls\") pod \"501f7779-9761-4888-bcec-b19b7cede5ca\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.734585 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data-custom\") pod \"501f7779-9761-4888-bcec-b19b7cede5ca\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.734616 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-scripts\") pod \"501f7779-9761-4888-bcec-b19b7cede5ca\" (UID: \"501f7779-9761-4888-bcec-b19b7cede5ca\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.736163 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/501f7779-9761-4888-bcec-b19b7cede5ca-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "501f7779-9761-4888-bcec-b19b7cede5ca" (UID: "501f7779-9761-4888-bcec-b19b7cede5ca"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.751020 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "501f7779-9761-4888-bcec-b19b7cede5ca" (UID: "501f7779-9761-4888-bcec-b19b7cede5ca"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.751243 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/501f7779-9761-4888-bcec-b19b7cede5ca-kube-api-access-kflfn" (OuterVolumeSpecName: "kube-api-access-kflfn") pod "501f7779-9761-4888-bcec-b19b7cede5ca" (UID: "501f7779-9761-4888-bcec-b19b7cede5ca"). InnerVolumeSpecName "kube-api-access-kflfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.756867 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-scripts" (OuterVolumeSpecName: "scripts") pod "501f7779-9761-4888-bcec-b19b7cede5ca" (UID: "501f7779-9761-4888-bcec-b19b7cede5ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836177 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-iscsi\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836258 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-machine-id\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836281 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-sys\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc 
kubenswrapper[4860]: I0121 21:39:31.836327 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836358 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836346 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data-custom\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836476 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-lib-cinder\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836403 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-sys" (OuterVolumeSpecName: "sys") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836557 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-cert-memcached-mtls\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836587 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnr7b\" (UniqueName: \"kubernetes.io/projected/d9c8e109-4a77-4ee3-bc53-130f69698d16-kube-api-access-bnr7b\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836592 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836658 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-lib-modules\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836679 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-nvme\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836779 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836818 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-scripts\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836847 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-cinder\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836865 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-combined-ca-bundle\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836886 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-run\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836928 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.836984 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-dev\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.837004 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-brick\") pod \"d9c8e109-4a77-4ee3-bc53-130f69698d16\" (UID: \"d9c8e109-4a77-4ee3-bc53-130f69698d16\") " Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.837026 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). 
InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.837057 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.837146 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-run" (OuterVolumeSpecName: "run") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838170 4860 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838190 4860 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838203 4860 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-sys\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838213 4860 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc 
kubenswrapper[4860]: I0121 21:39:31.838222 4860 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/501f7779-9761-4888-bcec-b19b7cede5ca-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838233 4860 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838242 4860 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838252 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kflfn\" (UniqueName: \"kubernetes.io/projected/501f7779-9761-4888-bcec-b19b7cede5ca-kube-api-access-kflfn\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838266 4860 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838276 4860 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838285 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838295 4860 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-run\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838328 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.838353 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-dev" (OuterVolumeSpecName: "dev") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.841151 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-scripts" (OuterVolumeSpecName: "scripts") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.851149 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c8e109-4a77-4ee3-bc53-130f69698d16-kube-api-access-bnr7b" (OuterVolumeSpecName: "kube-api-access-bnr7b") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "kube-api-access-bnr7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.880379 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.883608 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "501f7779-9761-4888-bcec-b19b7cede5ca" (UID: "501f7779-9761-4888-bcec-b19b7cede5ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.940704 4860 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-dev\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.940755 4860 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d9c8e109-4a77-4ee3-bc53-130f69698d16-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.940775 4860 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.940785 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnr7b\" (UniqueName: \"kubernetes.io/projected/d9c8e109-4a77-4ee3-bc53-130f69698d16-kube-api-access-bnr7b\") on node 
\"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.940794 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.940802 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.960184 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.960304 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data" (OuterVolumeSpecName: "config-data") pod "501f7779-9761-4888-bcec-b19b7cede5ca" (UID: "501f7779-9761-4888-bcec-b19b7cede5ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:31 crc kubenswrapper[4860]: I0121 21:39:31.993718 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.010191 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data" (OuterVolumeSpecName: "config-data") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.012872 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "501f7779-9761-4888-bcec-b19b7cede5ca" (UID: "501f7779-9761-4888-bcec-b19b7cede5ca"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.042391 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.042858 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/501f7779-9761-4888-bcec-b19b7cede5ca-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.042974 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.043059 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.082085 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "d9c8e109-4a77-4ee3-bc53-130f69698d16" (UID: "d9c8e109-4a77-4ee3-bc53-130f69698d16"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.144768 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-scripts\") pod \"cd8f1ed8-22ff-4839-b3da-6556980904b8\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.144877 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-run-httpd\") pod \"cd8f1ed8-22ff-4839-b3da-6556980904b8\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.144981 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqcwn\" (UniqueName: \"kubernetes.io/projected/cd8f1ed8-22ff-4839-b3da-6556980904b8-kube-api-access-hqcwn\") pod \"cd8f1ed8-22ff-4839-b3da-6556980904b8\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.145006 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-log-httpd\") pod \"cd8f1ed8-22ff-4839-b3da-6556980904b8\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.145059 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-ceilometer-tls-certs\") pod \"cd8f1ed8-22ff-4839-b3da-6556980904b8\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.145097 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-config-data\") pod \"cd8f1ed8-22ff-4839-b3da-6556980904b8\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.145123 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-sg-core-conf-yaml\") pod \"cd8f1ed8-22ff-4839-b3da-6556980904b8\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.145140 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-combined-ca-bundle\") pod \"cd8f1ed8-22ff-4839-b3da-6556980904b8\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.145546 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d9c8e109-4a77-4ee3-bc53-130f69698d16-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.145677 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cd8f1ed8-22ff-4839-b3da-6556980904b8" (UID: "cd8f1ed8-22ff-4839-b3da-6556980904b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.146215 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cd8f1ed8-22ff-4839-b3da-6556980904b8" (UID: "cd8f1ed8-22ff-4839-b3da-6556980904b8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.149013 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8f1ed8-22ff-4839-b3da-6556980904b8-kube-api-access-hqcwn" (OuterVolumeSpecName: "kube-api-access-hqcwn") pod "cd8f1ed8-22ff-4839-b3da-6556980904b8" (UID: "cd8f1ed8-22ff-4839-b3da-6556980904b8"). InnerVolumeSpecName "kube-api-access-hqcwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.149124 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-scripts" (OuterVolumeSpecName: "scripts") pod "cd8f1ed8-22ff-4839-b3da-6556980904b8" (UID: "cd8f1ed8-22ff-4839-b3da-6556980904b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.170216 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cd8f1ed8-22ff-4839-b3da-6556980904b8" (UID: "cd8f1ed8-22ff-4839-b3da-6556980904b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.208085 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "cd8f1ed8-22ff-4839-b3da-6556980904b8" (UID: "cd8f1ed8-22ff-4839-b3da-6556980904b8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.234878 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd8f1ed8-22ff-4839-b3da-6556980904b8" (UID: "cd8f1ed8-22ff-4839-b3da-6556980904b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.246271 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-config-data" (OuterVolumeSpecName: "config-data") pod "cd8f1ed8-22ff-4839-b3da-6556980904b8" (UID: "cd8f1ed8-22ff-4839-b3da-6556980904b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.246600 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-config-data\") pod \"cd8f1ed8-22ff-4839-b3da-6556980904b8\" (UID: \"cd8f1ed8-22ff-4839-b3da-6556980904b8\") " Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.247170 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.247194 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.247208 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.247219 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.247230 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqcwn\" (UniqueName: \"kubernetes.io/projected/cd8f1ed8-22ff-4839-b3da-6556980904b8-kube-api-access-hqcwn\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.247243 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8f1ed8-22ff-4839-b3da-6556980904b8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.247253 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: W0121 21:39:32.247375 4860 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/cd8f1ed8-22ff-4839-b3da-6556980904b8/volumes/kubernetes.io~secret/config-data Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.247399 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-config-data" (OuterVolumeSpecName: "config-data") pod "cd8f1ed8-22ff-4839-b3da-6556980904b8" (UID: "cd8f1ed8-22ff-4839-b3da-6556980904b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.349359 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd8f1ed8-22ff-4839-b3da-6556980904b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.614020 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80181717-d115-418a-b9be-d17cc852e9ec" path="/var/lib/kubelet/pods/80181717-d115-418a-b9be-d17cc852e9ec/volumes" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.644228 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd8f1ed8-22ff-4839-b3da-6556980904b8","Type":"ContainerDied","Data":"22beb3edd53371b522aca262c1501a27986c289b0657142c63869f4a33e4ef69"} Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.644348 4860 scope.go:117] "RemoveContainer" containerID="ecb9cd38ebe824a0013e1399f69a12f16e39016c459465729a38e0a9eff7b215" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.644408 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.644475 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.644483 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.661466 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.681830 4860 scope.go:117] "RemoveContainer" containerID="b21583037c173ca7261948ee8ebb57f622489f033aeff6d95a6154be4a01078d" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.697222 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.713300 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.721204 4860 scope.go:117] "RemoveContainer" containerID="e810c364dcb91c62c18804349a49688101173730d5c2948fe6927f837d12b2f4" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.729753 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.758184 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:32 crc kubenswrapper[4860]: E0121 21:39:32.759290 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="ceilometer-central-agent" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.759908 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="ceilometer-central-agent" Jan 21 21:39:32 crc kubenswrapper[4860]: E0121 21:39:32.760182 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80181717-d115-418a-b9be-d17cc852e9ec" containerName="cinder-api" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.760337 
4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="80181717-d115-418a-b9be-d17cc852e9ec" containerName="cinder-api" Jan 21 21:39:32 crc kubenswrapper[4860]: E0121 21:39:32.770832 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="proxy-httpd" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.771589 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="proxy-httpd" Jan 21 21:39:32 crc kubenswrapper[4860]: E0121 21:39:32.771688 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c8e109-4a77-4ee3-bc53-130f69698d16" containerName="cinder-backup" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.771776 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c8e109-4a77-4ee3-bc53-130f69698d16" containerName="cinder-backup" Jan 21 21:39:32 crc kubenswrapper[4860]: E0121 21:39:32.771869 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501f7779-9761-4888-bcec-b19b7cede5ca" containerName="cinder-scheduler" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.773763 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="501f7779-9761-4888-bcec-b19b7cede5ca" containerName="cinder-scheduler" Jan 21 21:39:32 crc kubenswrapper[4860]: E0121 21:39:32.773855 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a50d91-ca3e-487f-9ae7-fbde57adf0ca" containerName="mariadb-account-delete" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.773960 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a50d91-ca3e-487f-9ae7-fbde57adf0ca" containerName="mariadb-account-delete" Jan 21 21:39:32 crc kubenswrapper[4860]: E0121 21:39:32.774086 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c8e109-4a77-4ee3-bc53-130f69698d16" containerName="probe" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.774887 4860 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d9c8e109-4a77-4ee3-bc53-130f69698d16" containerName="probe" Jan 21 21:39:32 crc kubenswrapper[4860]: E0121 21:39:32.776881 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501f7779-9761-4888-bcec-b19b7cede5ca" containerName="probe" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.777325 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="501f7779-9761-4888-bcec-b19b7cede5ca" containerName="probe" Jan 21 21:39:32 crc kubenswrapper[4860]: E0121 21:39:32.777468 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="ceilometer-notification-agent" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.777548 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="ceilometer-notification-agent" Jan 21 21:39:32 crc kubenswrapper[4860]: E0121 21:39:32.777823 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80181717-d115-418a-b9be-d17cc852e9ec" containerName="cinder-api-log" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.777899 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="80181717-d115-418a-b9be-d17cc852e9ec" containerName="cinder-api-log" Jan 21 21:39:32 crc kubenswrapper[4860]: E0121 21:39:32.777997 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="sg-core" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.778071 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="sg-core" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.770338 4860 scope.go:117] "RemoveContainer" containerID="dc9539f52b73332e52647133d0fd5a80085a1167efe9b0a9d42e230d9d793ea5" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.781598 4860 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="80181717-d115-418a-b9be-d17cc852e9ec" containerName="cinder-api-log" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.781806 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="proxy-httpd" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.784986 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="501f7779-9761-4888-bcec-b19b7cede5ca" containerName="probe" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.785095 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="sg-core" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.785174 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="ceilometer-central-agent" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.785256 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c8e109-4a77-4ee3-bc53-130f69698d16" containerName="probe" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.785367 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="501f7779-9761-4888-bcec-b19b7cede5ca" containerName="cinder-scheduler" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.785447 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="80181717-d115-418a-b9be-d17cc852e9ec" containerName="cinder-api" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.785528 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c8e109-4a77-4ee3-bc53-130f69698d16" containerName="cinder-backup" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.785606 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a50d91-ca3e-487f-9ae7-fbde57adf0ca" containerName="mariadb-account-delete" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.785688 4860 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" containerName="ceilometer-notification-agent" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.789654 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.789850 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.790182 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.790521 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.794469 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.794852 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.800659 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.809966 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.868164 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.868261 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-run-httpd\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.868320 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-config-data\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.868372 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-log-httpd\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.868426 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-scripts\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.868531 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2bml\" (UniqueName: \"kubernetes.io/projected/86227605-09b1-4487-a65b-bb35de55fae1-kube-api-access-v2bml\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.868674 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.868733 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.894048 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-create-nspmr"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.909231 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-create-nspmr"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.917945 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.925268 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder19d1-account-delete-wmtvn"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.931560 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-19d1-account-create-update-ms5wz"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.937802 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder19d1-account-delete-wmtvn"] Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.970858 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") 
" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.970947 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.970986 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.971023 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-run-httpd\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.971055 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-config-data\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.971087 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-log-httpd\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.971108 4860 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-scripts\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.971155 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2bml\" (UniqueName: \"kubernetes.io/projected/86227605-09b1-4487-a65b-bb35de55fae1-kube-api-access-v2bml\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.972167 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-log-httpd\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.972523 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-run-httpd\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.977611 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.978378 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-scripts\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.978874 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.979891 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-config-data\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.981496 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:32 crc kubenswrapper[4860]: I0121 21:39:32.989757 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2bml\" (UniqueName: \"kubernetes.io/projected/86227605-09b1-4487-a65b-bb35de55fae1-kube-api-access-v2bml\") pod \"ceilometer-0\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:33 crc kubenswrapper[4860]: I0121 21:39:33.117390 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:33 crc kubenswrapper[4860]: I0121 21:39:33.655419 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:33 crc kubenswrapper[4860]: W0121 21:39:33.660526 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86227605_09b1_4487_a65b_bb35de55fae1.slice/crio-c40251770fe75bb95b0e6d25865ded482e2423a7561f7747e57a34b06fd12515 WatchSource:0}: Error finding container c40251770fe75bb95b0e6d25865ded482e2423a7561f7747e57a34b06fd12515: Status 404 returned error can't find the container with id c40251770fe75bb95b0e6d25865ded482e2423a7561f7747e57a34b06fd12515 Jan 21 21:39:33 crc kubenswrapper[4860]: I0121 21:39:33.904641 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:34 crc kubenswrapper[4860]: I0121 21:39:34.592230 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a50d91-ca3e-487f-9ae7-fbde57adf0ca" path="/var/lib/kubelet/pods/28a50d91-ca3e-487f-9ae7-fbde57adf0ca/volumes" Jan 21 21:39:34 crc kubenswrapper[4860]: I0121 21:39:34.593373 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3250668b-0249-48e2-b1a7-def619c72d7c" path="/var/lib/kubelet/pods/3250668b-0249-48e2-b1a7-def619c72d7c/volumes" Jan 21 21:39:34 crc kubenswrapper[4860]: I0121 21:39:34.593951 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="501f7779-9761-4888-bcec-b19b7cede5ca" path="/var/lib/kubelet/pods/501f7779-9761-4888-bcec-b19b7cede5ca/volumes" Jan 21 21:39:34 crc kubenswrapper[4860]: I0121 21:39:34.595103 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="639aa53e-95ce-499f-a6af-f6ffb3d07f31" 
path="/var/lib/kubelet/pods/639aa53e-95ce-499f-a6af-f6ffb3d07f31/volumes" Jan 21 21:39:34 crc kubenswrapper[4860]: I0121 21:39:34.595637 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd8f1ed8-22ff-4839-b3da-6556980904b8" path="/var/lib/kubelet/pods/cd8f1ed8-22ff-4839-b3da-6556980904b8/volumes" Jan 21 21:39:34 crc kubenswrapper[4860]: I0121 21:39:34.596387 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c8e109-4a77-4ee3-bc53-130f69698d16" path="/var/lib/kubelet/pods/d9c8e109-4a77-4ee3-bc53-130f69698d16/volumes" Jan 21 21:39:34 crc kubenswrapper[4860]: I0121 21:39:34.666422 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"86227605-09b1-4487-a65b-bb35de55fae1","Type":"ContainerStarted","Data":"b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926"} Jan 21 21:39:34 crc kubenswrapper[4860]: I0121 21:39:34.666482 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"86227605-09b1-4487-a65b-bb35de55fae1","Type":"ContainerStarted","Data":"c40251770fe75bb95b0e6d25865ded482e2423a7561f7747e57a34b06fd12515"} Jan 21 21:39:35 crc kubenswrapper[4860]: I0121 21:39:35.123405 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:35 crc kubenswrapper[4860]: I0121 21:39:35.580805 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97" Jan 21 21:39:35 crc kubenswrapper[4860]: I0121 21:39:35.681796 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"86227605-09b1-4487-a65b-bb35de55fae1","Type":"ContainerStarted","Data":"b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006"} Jan 21 21:39:36 crc kubenswrapper[4860]: I0121 21:39:36.392146 
4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:36 crc kubenswrapper[4860]: I0121 21:39:36.695294 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"86227605-09b1-4487-a65b-bb35de55fae1","Type":"ContainerStarted","Data":"04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd"} Jan 21 21:39:36 crc kubenswrapper[4860]: I0121 21:39:36.702307 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"32a9f1332c2c5de681bf846ae634d50dfe1d50c28bd4d09220c269cccaea8975"} Jan 21 21:39:37 crc kubenswrapper[4860]: I0121 21:39:37.052729 4860 scope.go:117] "RemoveContainer" containerID="a58d16d21c8247aa169e2b1c67f46234d9e2e2bd391821f34370ea0c1cda09e9" Jan 21 21:39:37 crc kubenswrapper[4860]: I0121 21:39:37.087299 4860 scope.go:117] "RemoveContainer" containerID="3239fcd0150afce419796a6fda8adf4ac71a4dc43a117cef2ced29c08aa29aeb" Jan 21 21:39:37 crc kubenswrapper[4860]: I0121 21:39:37.228820 4860 scope.go:117] "RemoveContainer" containerID="c8d0df3e3bc86d46d44ef7633ba86773c1b75930aa8b3b363f80c7b1015f16b9" Jan 21 21:39:37 crc kubenswrapper[4860]: I0121 21:39:37.300116 4860 scope.go:117] "RemoveContainer" containerID="e7f7edb7f4948013fd49a01c78713fafd82849786f78b3e94dca0e23b5b102d9" Jan 21 21:39:37 crc kubenswrapper[4860]: I0121 21:39:37.619984 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:37 crc kubenswrapper[4860]: I0121 21:39:37.719650 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"86227605-09b1-4487-a65b-bb35de55fae1","Type":"ContainerStarted","Data":"45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44"} Jan 21 21:39:37 crc kubenswrapper[4860]: I0121 21:39:37.721362 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:37 crc kubenswrapper[4860]: I0121 21:39:37.764536 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.370636541 podStartE2EDuration="5.764506153s" podCreationTimestamp="2026-01-21 21:39:32 +0000 UTC" firstStartedPulling="2026-01-21 21:39:33.663688142 +0000 UTC m=+1865.885866612" lastFinishedPulling="2026-01-21 21:39:37.057557754 +0000 UTC m=+1869.279736224" observedRunningTime="2026-01-21 21:39:37.746489604 +0000 UTC m=+1869.968668094" watchObservedRunningTime="2026-01-21 21:39:37.764506153 +0000 UTC m=+1869.986684623" Jan 21 21:39:38 crc kubenswrapper[4860]: I0121 21:39:38.864879 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:39 crc kubenswrapper[4860]: I0121 21:39:39.771603 4860 generic.go:334] "Generic (PLEG): container finished" podID="e3d34481-e759-4c8e-a1a9-43b9ee574f6c" containerID="c6dfec913e0fd05d02c8944f36cf8269222ccd77ed0fb84830b2eb749600c35c" exitCode=0 Jan 21 21:39:39 crc kubenswrapper[4860]: I0121 21:39:39.771756 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3d34481-e759-4c8e-a1a9-43b9ee574f6c","Type":"ContainerDied","Data":"c6dfec913e0fd05d02c8944f36cf8269222ccd77ed0fb84830b2eb749600c35c"} Jan 21 21:39:39 crc kubenswrapper[4860]: I0121 21:39:39.919639 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:39 crc kubenswrapper[4860]: I0121 21:39:39.954045 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-config-data\") pod \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " Jan 21 21:39:39 crc kubenswrapper[4860]: I0121 21:39:39.954186 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-combined-ca-bundle\") pod \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " Jan 21 21:39:39 crc kubenswrapper[4860]: I0121 21:39:39.954273 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-logs\") pod \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " Jan 21 21:39:39 crc kubenswrapper[4860]: I0121 21:39:39.954342 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-custom-prometheus-ca\") pod \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " Jan 21 21:39:39 crc kubenswrapper[4860]: I0121 21:39:39.954374 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-cert-memcached-mtls\") pod \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " Jan 21 21:39:39 crc kubenswrapper[4860]: I0121 21:39:39.954417 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-g7pkp\" (UniqueName: \"kubernetes.io/projected/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-kube-api-access-g7pkp\") pod \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\" (UID: \"e3d34481-e759-4c8e-a1a9-43b9ee574f6c\") " Jan 21 21:39:39 crc kubenswrapper[4860]: I0121 21:39:39.955800 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-logs" (OuterVolumeSpecName: "logs") pod "e3d34481-e759-4c8e-a1a9-43b9ee574f6c" (UID: "e3d34481-e759-4c8e-a1a9-43b9ee574f6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:39:39 crc kubenswrapper[4860]: I0121 21:39:39.969576 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-kube-api-access-g7pkp" (OuterVolumeSpecName: "kube-api-access-g7pkp") pod "e3d34481-e759-4c8e-a1a9-43b9ee574f6c" (UID: "e3d34481-e759-4c8e-a1a9-43b9ee574f6c"). InnerVolumeSpecName "kube-api-access-g7pkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.009404 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3d34481-e759-4c8e-a1a9-43b9ee574f6c" (UID: "e3d34481-e759-4c8e-a1a9-43b9ee574f6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.044177 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-config-data" (OuterVolumeSpecName: "config-data") pod "e3d34481-e759-4c8e-a1a9-43b9ee574f6c" (UID: "e3d34481-e759-4c8e-a1a9-43b9ee574f6c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.057592 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.057641 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.057655 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.057670 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7pkp\" (UniqueName: \"kubernetes.io/projected/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-kube-api-access-g7pkp\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.057796 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e3d34481-e759-4c8e-a1a9-43b9ee574f6c" (UID: "e3d34481-e759-4c8e-a1a9-43b9ee574f6c"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.078303 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "e3d34481-e759-4c8e-a1a9-43b9ee574f6c" (UID: "e3d34481-e759-4c8e-a1a9-43b9ee574f6c"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.124707 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e3d34481-e759-4c8e-a1a9-43b9ee574f6c/watcher-decision-engine/0.log" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.159338 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.159401 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e3d34481-e759-4c8e-a1a9-43b9ee574f6c-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.784060 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e3d34481-e759-4c8e-a1a9-43b9ee574f6c","Type":"ContainerDied","Data":"2305756bd885de1f06b96687d12f462395d186a7b464fa1d11d2d8338ac3ae8d"} Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.784165 4860 scope.go:117] "RemoveContainer" containerID="c6dfec913e0fd05d02c8944f36cf8269222ccd77ed0fb84830b2eb749600c35c" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.784358 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.819866 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.826222 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.855209 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:40 crc kubenswrapper[4860]: E0121 21:39:40.855732 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d34481-e759-4c8e-a1a9-43b9ee574f6c" containerName="watcher-decision-engine" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.855760 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d34481-e759-4c8e-a1a9-43b9ee574f6c" containerName="watcher-decision-engine" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.856105 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d34481-e759-4c8e-a1a9-43b9ee574f6c" containerName="watcher-decision-engine" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.857067 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.864270 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.869798 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.972867 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.972972 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.973025 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ed7e08-65d4-4e78-8f05-3974349dc260-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.973052 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.973113 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkcdl\" (UniqueName: \"kubernetes.io/projected/51ed7e08-65d4-4e78-8f05-3974349dc260-kube-api-access-dkcdl\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:40 crc kubenswrapper[4860]: I0121 21:39:40.973136 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.075647 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.075778 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkcdl\" (UniqueName: \"kubernetes.io/projected/51ed7e08-65d4-4e78-8f05-3974349dc260-kube-api-access-dkcdl\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 
21:39:41.075817 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.075882 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.075963 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.076014 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ed7e08-65d4-4e78-8f05-3974349dc260-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.076650 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ed7e08-65d4-4e78-8f05-3974349dc260-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 
21:39:41.086522 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.086597 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.086608 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.087361 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.103509 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkcdl\" (UniqueName: \"kubernetes.io/projected/51ed7e08-65d4-4e78-8f05-3974349dc260-kube-api-access-dkcdl\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc 
kubenswrapper[4860]: I0121 21:39:41.182204 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.717895 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:41 crc kubenswrapper[4860]: I0121 21:39:41.838545 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"51ed7e08-65d4-4e78-8f05-3974349dc260","Type":"ContainerStarted","Data":"a654109ead75eefed97d53462dfb6c94d1d2f1bc1983d0f7f9e5e62e61261738"} Jan 21 21:39:42 crc kubenswrapper[4860]: I0121 21:39:42.591635 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d34481-e759-4c8e-a1a9-43b9ee574f6c" path="/var/lib/kubelet/pods/e3d34481-e759-4c8e-a1a9-43b9ee574f6c/volumes" Jan 21 21:39:42 crc kubenswrapper[4860]: I0121 21:39:42.849195 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"51ed7e08-65d4-4e78-8f05-3974349dc260","Type":"ContainerStarted","Data":"76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69"} Jan 21 21:39:43 crc kubenswrapper[4860]: I0121 21:39:43.727657 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_51ed7e08-65d4-4e78-8f05-3974349dc260/watcher-decision-engine/0.log" Jan 21 21:39:45 crc kubenswrapper[4860]: I0121 21:39:45.130207 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_51ed7e08-65d4-4e78-8f05-3974349dc260/watcher-decision-engine/0.log" Jan 21 21:39:46 crc kubenswrapper[4860]: I0121 21:39:46.317249 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_51ed7e08-65d4-4e78-8f05-3974349dc260/watcher-decision-engine/0.log" Jan 21 21:39:47 crc kubenswrapper[4860]: I0121 21:39:47.562172 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_51ed7e08-65d4-4e78-8f05-3974349dc260/watcher-decision-engine/0.log" Jan 21 21:39:48 crc kubenswrapper[4860]: I0121 21:39:48.790607 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_51ed7e08-65d4-4e78-8f05-3974349dc260/watcher-decision-engine/0.log" Jan 21 21:39:50 crc kubenswrapper[4860]: I0121 21:39:50.207851 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_51ed7e08-65d4-4e78-8f05-3974349dc260/watcher-decision-engine/0.log" Jan 21 21:39:51 crc kubenswrapper[4860]: I0121 21:39:51.183023 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:51 crc kubenswrapper[4860]: I0121 21:39:51.215293 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:51 crc kubenswrapper[4860]: I0121 21:39:51.247753 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=11.247654815 podStartE2EDuration="11.247654815s" podCreationTimestamp="2026-01-21 21:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:39:42.877105595 +0000 UTC m=+1875.099284085" watchObservedRunningTime="2026-01-21 21:39:51.247654815 +0000 UTC m=+1883.469833285" Jan 21 21:39:51 crc kubenswrapper[4860]: I0121 21:39:51.525979 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_51ed7e08-65d4-4e78-8f05-3974349dc260/watcher-decision-engine/0.log" Jan 21 21:39:52 crc kubenswrapper[4860]: I0121 21:39:52.035004 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:52 crc kubenswrapper[4860]: I0121 21:39:52.081493 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.012438 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_51ed7e08-65d4-4e78-8f05-3974349dc260/watcher-decision-engine/0.log" Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.278011 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-982rj"] Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.285895 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-982rj"] Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.332675 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchered96-account-delete-wcxvm"] Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.335207 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchered96-account-delete-wcxvm" Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.347144 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchered96-account-delete-wcxvm"] Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.373956 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.431009 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.431424 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="b10a74e6-0097-4e91-9d5b-72169c3ffc36" containerName="watcher-applier" containerID="cri-o://05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff" gracePeriod=30 Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.499145 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/416d2417-81b6-4ccc-b775-3b403eda7a74-operator-scripts\") pod \"watchered96-account-delete-wcxvm\" (UID: \"416d2417-81b6-4ccc-b775-3b403eda7a74\") " pod="watcher-kuttl-default/watchered96-account-delete-wcxvm" Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.499292 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv76v\" (UniqueName: \"kubernetes.io/projected/416d2417-81b6-4ccc-b775-3b403eda7a74-kube-api-access-xv76v\") pod \"watchered96-account-delete-wcxvm\" (UID: \"416d2417-81b6-4ccc-b775-3b403eda7a74\") " pod="watcher-kuttl-default/watchered96-account-delete-wcxvm" Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.517158 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.517571 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="01200251-c652-48cd-ac68-c422cd325f71" containerName="watcher-kuttl-api-log" containerID="cri-o://42807b3b9f4026ed8514be6715a097d7a897eab8c7d15bfef10fa01bc87822b0" gracePeriod=30 Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.517870 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="01200251-c652-48cd-ac68-c422cd325f71" containerName="watcher-api" containerID="cri-o://dda1dd83abe1be5cd33a9e38a5602b6e3bae8ec487870a0db4416574e83a4965" gracePeriod=30 Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.601410 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/416d2417-81b6-4ccc-b775-3b403eda7a74-operator-scripts\") pod \"watchered96-account-delete-wcxvm\" (UID: \"416d2417-81b6-4ccc-b775-3b403eda7a74\") " pod="watcher-kuttl-default/watchered96-account-delete-wcxvm" Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.601535 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv76v\" (UniqueName: \"kubernetes.io/projected/416d2417-81b6-4ccc-b775-3b403eda7a74-kube-api-access-xv76v\") pod \"watchered96-account-delete-wcxvm\" (UID: \"416d2417-81b6-4ccc-b775-3b403eda7a74\") " pod="watcher-kuttl-default/watchered96-account-delete-wcxvm" Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.603026 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/416d2417-81b6-4ccc-b775-3b403eda7a74-operator-scripts\") pod \"watchered96-account-delete-wcxvm\" (UID: \"416d2417-81b6-4ccc-b775-3b403eda7a74\") " 
pod="watcher-kuttl-default/watchered96-account-delete-wcxvm"
Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.638641 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv76v\" (UniqueName: \"kubernetes.io/projected/416d2417-81b6-4ccc-b775-3b403eda7a74-kube-api-access-xv76v\") pod \"watchered96-account-delete-wcxvm\" (UID: \"416d2417-81b6-4ccc-b775-3b403eda7a74\") " pod="watcher-kuttl-default/watchered96-account-delete-wcxvm"
Jan 21 21:39:53 crc kubenswrapper[4860]: I0121 21:39:53.659197 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchered96-account-delete-wcxvm"
Jan 21 21:39:53 crc kubenswrapper[4860]: E0121 21:39:53.748244 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:39:53 crc kubenswrapper[4860]: E0121 21:39:53.768713 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:39:53 crc kubenswrapper[4860]: E0121 21:39:53.774596 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:39:53 crc kubenswrapper[4860]: E0121 21:39:53.774660 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="b10a74e6-0097-4e91-9d5b-72169c3ffc36" containerName="watcher-applier"
Jan 21 21:39:54 crc kubenswrapper[4860]: I0121 21:39:54.072377 4860 generic.go:334] "Generic (PLEG): container finished" podID="01200251-c652-48cd-ac68-c422cd325f71" containerID="42807b3b9f4026ed8514be6715a097d7a897eab8c7d15bfef10fa01bc87822b0" exitCode=143
Jan 21 21:39:54 crc kubenswrapper[4860]: I0121 21:39:54.072470 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"01200251-c652-48cd-ac68-c422cd325f71","Type":"ContainerDied","Data":"42807b3b9f4026ed8514be6715a097d7a897eab8c7d15bfef10fa01bc87822b0"}
Jan 21 21:39:54 crc kubenswrapper[4860]: I0121 21:39:54.073307 4860 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-57bm7\" not found"
Jan 21 21:39:54 crc kubenswrapper[4860]: E0121 21:39:54.219165 4860 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Jan 21 21:39:54 crc kubenswrapper[4860]: E0121 21:39:54.219448 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data podName:51ed7e08-65d4-4e78-8f05-3974349dc260 nodeName:}" failed. No retries permitted until 2026-01-21 21:39:54.71936523 +0000 UTC m=+1886.941543700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "51ed7e08-65d4-4e78-8f05-3974349dc260") : secret "watcher-kuttl-decision-engine-config-data" not found
Jan 21 21:39:54 crc kubenswrapper[4860]: I0121 21:39:54.251204 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchered96-account-delete-wcxvm"]
Jan 21 21:39:54 crc kubenswrapper[4860]: I0121 21:39:54.593345 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="943e71b2-4f7f-4746-8e43-ae9f9ddab819" path="/var/lib/kubelet/pods/943e71b2-4f7f-4746-8e43-ae9f9ddab819/volumes"
Jan 21 21:39:54 crc kubenswrapper[4860]: I0121 21:39:54.686379 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="01200251-c652-48cd-ac68-c422cd325f71" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.182:9322/\": read tcp 10.217.0.2:39196->10.217.0.182:9322: read: connection reset by peer"
Jan 21 21:39:54 crc kubenswrapper[4860]: I0121 21:39:54.686369 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="01200251-c652-48cd-ac68-c422cd325f71" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.182:9322/\": read tcp 10.217.0.2:39212->10.217.0.182:9322: read: connection reset by peer"
Jan 21 21:39:54 crc kubenswrapper[4860]: E0121 21:39:54.733483 4860 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Jan 21 21:39:54 crc kubenswrapper[4860]: E0121 21:39:54.733568 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data podName:51ed7e08-65d4-4e78-8f05-3974349dc260 nodeName:}" failed. No retries permitted until 2026-01-21 21:39:55.733549489 +0000 UTC m=+1887.955727959 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "51ed7e08-65d4-4e78-8f05-3974349dc260") : secret "watcher-kuttl-decision-engine-config-data" not found
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.095386 4860 generic.go:334] "Generic (PLEG): container finished" podID="01200251-c652-48cd-ac68-c422cd325f71" containerID="dda1dd83abe1be5cd33a9e38a5602b6e3bae8ec487870a0db4416574e83a4965" exitCode=0
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.095475 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"01200251-c652-48cd-ac68-c422cd325f71","Type":"ContainerDied","Data":"dda1dd83abe1be5cd33a9e38a5602b6e3bae8ec487870a0db4416574e83a4965"}
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.099246 4860 generic.go:334] "Generic (PLEG): container finished" podID="416d2417-81b6-4ccc-b775-3b403eda7a74" containerID="92cb3f8317d559d8e2324a7668f63d93ea006dcd5d5febc2332fdaa33781c07c" exitCode=0
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.099557 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="51ed7e08-65d4-4e78-8f05-3974349dc260" containerName="watcher-decision-engine" containerID="cri-o://76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69" gracePeriod=30
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.100087 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchered96-account-delete-wcxvm" event={"ID":"416d2417-81b6-4ccc-b775-3b403eda7a74","Type":"ContainerDied","Data":"92cb3f8317d559d8e2324a7668f63d93ea006dcd5d5febc2332fdaa33781c07c"}
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.100130 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchered96-account-delete-wcxvm" event={"ID":"416d2417-81b6-4ccc-b775-3b403eda7a74","Type":"ContainerStarted","Data":"35bba8ee32f0ca9ebadf1a9f20a3822ba8006112980e21200388a18585eba494"}
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.241553 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.339783 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-config-data\") pod \"01200251-c652-48cd-ac68-c422cd325f71\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") "
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.339848 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-custom-prometheus-ca\") pod \"01200251-c652-48cd-ac68-c422cd325f71\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") "
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.340076 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx7fz\" (UniqueName: \"kubernetes.io/projected/01200251-c652-48cd-ac68-c422cd325f71-kube-api-access-cx7fz\") pod \"01200251-c652-48cd-ac68-c422cd325f71\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") "
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.340118 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-combined-ca-bundle\") pod \"01200251-c652-48cd-ac68-c422cd325f71\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") "
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.340139 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-cert-memcached-mtls\") pod \"01200251-c652-48cd-ac68-c422cd325f71\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") "
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.340212 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01200251-c652-48cd-ac68-c422cd325f71-logs\") pod \"01200251-c652-48cd-ac68-c422cd325f71\" (UID: \"01200251-c652-48cd-ac68-c422cd325f71\") "
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.341879 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01200251-c652-48cd-ac68-c422cd325f71-logs" (OuterVolumeSpecName: "logs") pod "01200251-c652-48cd-ac68-c422cd325f71" (UID: "01200251-c652-48cd-ac68-c422cd325f71"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.348811 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01200251-c652-48cd-ac68-c422cd325f71-kube-api-access-cx7fz" (OuterVolumeSpecName: "kube-api-access-cx7fz") pod "01200251-c652-48cd-ac68-c422cd325f71" (UID: "01200251-c652-48cd-ac68-c422cd325f71"). InnerVolumeSpecName "kube-api-access-cx7fz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.371254 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "01200251-c652-48cd-ac68-c422cd325f71" (UID: "01200251-c652-48cd-ac68-c422cd325f71"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.377552 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01200251-c652-48cd-ac68-c422cd325f71" (UID: "01200251-c652-48cd-ac68-c422cd325f71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.395200 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-config-data" (OuterVolumeSpecName: "config-data") pod "01200251-c652-48cd-ac68-c422cd325f71" (UID: "01200251-c652-48cd-ac68-c422cd325f71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.436990 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "01200251-c652-48cd-ac68-c422cd325f71" (UID: "01200251-c652-48cd-ac68-c422cd325f71"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.446624 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx7fz\" (UniqueName: \"kubernetes.io/projected/01200251-c652-48cd-ac68-c422cd325f71-kube-api-access-cx7fz\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.446675 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.446687 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.446697 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01200251-c652-48cd-ac68-c422cd325f71-logs\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.446709 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:55 crc kubenswrapper[4860]: I0121 21:39:55.446719 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/01200251-c652-48cd-ac68-c422cd325f71-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:55 crc kubenswrapper[4860]: E0121 21:39:55.754210 4860 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Jan 21 21:39:55 crc kubenswrapper[4860]: E0121 21:39:55.754336 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data podName:51ed7e08-65d4-4e78-8f05-3974349dc260 nodeName:}" failed. No retries permitted until 2026-01-21 21:39:57.754314629 +0000 UTC m=+1889.976493099 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "51ed7e08-65d4-4e78-8f05-3974349dc260") : secret "watcher-kuttl-decision-engine-config-data" not found
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.113031 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"01200251-c652-48cd-ac68-c422cd325f71","Type":"ContainerDied","Data":"a06f063ee075c07b034eb06e1ab62aee166382400ed9ea9b1543346c91cf4ed3"}
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.114156 4860 scope.go:117] "RemoveContainer" containerID="dda1dd83abe1be5cd33a9e38a5602b6e3bae8ec487870a0db4416574e83a4965"
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.113234 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.156205 4860 scope.go:117] "RemoveContainer" containerID="42807b3b9f4026ed8514be6715a097d7a897eab8c7d15bfef10fa01bc87822b0"
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.168592 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.178886 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.592881 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01200251-c652-48cd-ac68-c422cd325f71" path="/var/lib/kubelet/pods/01200251-c652-48cd-ac68-c422cd325f71/volumes"
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.650957 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchered96-account-delete-wcxvm"
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.730538 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.730984 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="ceilometer-central-agent" containerID="cri-o://b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926" gracePeriod=30
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.731206 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="proxy-httpd" containerID="cri-o://45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44" gracePeriod=30
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.731256 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="sg-core" containerID="cri-o://04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd" gracePeriod=30
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.731297 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="ceilometer-notification-agent" containerID="cri-o://b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006" gracePeriod=30
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.785155 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv76v\" (UniqueName: \"kubernetes.io/projected/416d2417-81b6-4ccc-b775-3b403eda7a74-kube-api-access-xv76v\") pod \"416d2417-81b6-4ccc-b775-3b403eda7a74\" (UID: \"416d2417-81b6-4ccc-b775-3b403eda7a74\") "
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.785356 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/416d2417-81b6-4ccc-b775-3b403eda7a74-operator-scripts\") pod \"416d2417-81b6-4ccc-b775-3b403eda7a74\" (UID: \"416d2417-81b6-4ccc-b775-3b403eda7a74\") "
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.786447 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/416d2417-81b6-4ccc-b775-3b403eda7a74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "416d2417-81b6-4ccc-b775-3b403eda7a74" (UID: "416d2417-81b6-4ccc-b775-3b403eda7a74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.798530 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416d2417-81b6-4ccc-b775-3b403eda7a74-kube-api-access-xv76v" (OuterVolumeSpecName: "kube-api-access-xv76v") pod "416d2417-81b6-4ccc-b775-3b403eda7a74" (UID: "416d2417-81b6-4ccc-b775-3b403eda7a74"). InnerVolumeSpecName "kube-api-access-xv76v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.798671 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.887795 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv76v\" (UniqueName: \"kubernetes.io/projected/416d2417-81b6-4ccc-b775-3b403eda7a74-kube-api-access-xv76v\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:56 crc kubenswrapper[4860]: I0121 21:39:56.887860 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/416d2417-81b6-4ccc-b775-3b403eda7a74-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:57 crc kubenswrapper[4860]: I0121 21:39:57.129854 4860 generic.go:334] "Generic (PLEG): container finished" podID="86227605-09b1-4487-a65b-bb35de55fae1" containerID="45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44" exitCode=0
Jan 21 21:39:57 crc kubenswrapper[4860]: I0121 21:39:57.129907 4860 generic.go:334] "Generic (PLEG): container finished" podID="86227605-09b1-4487-a65b-bb35de55fae1" containerID="04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd" exitCode=2
Jan 21 21:39:57 crc kubenswrapper[4860]: I0121 21:39:57.129921 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"86227605-09b1-4487-a65b-bb35de55fae1","Type":"ContainerDied","Data":"45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44"}
Jan 21 21:39:57 crc kubenswrapper[4860]: I0121 21:39:57.130063 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"86227605-09b1-4487-a65b-bb35de55fae1","Type":"ContainerDied","Data":"04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd"}
Jan 21 21:39:57 crc kubenswrapper[4860]: I0121 21:39:57.132325 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchered96-account-delete-wcxvm" event={"ID":"416d2417-81b6-4ccc-b775-3b403eda7a74","Type":"ContainerDied","Data":"35bba8ee32f0ca9ebadf1a9f20a3822ba8006112980e21200388a18585eba494"}
Jan 21 21:39:57 crc kubenswrapper[4860]: I0121 21:39:57.132379 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchered96-account-delete-wcxvm"
Jan 21 21:39:57 crc kubenswrapper[4860]: I0121 21:39:57.132390 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35bba8ee32f0ca9ebadf1a9f20a3822ba8006112980e21200388a18585eba494"
Jan 21 21:39:57 crc kubenswrapper[4860]: E0121 21:39:57.276435 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod416d2417_81b6_4ccc_b775_3b403eda7a74.slice/crio-35bba8ee32f0ca9ebadf1a9f20a3822ba8006112980e21200388a18585eba494\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod416d2417_81b6_4ccc_b775_3b403eda7a74.slice\": RecentStats: unable to find data in memory cache]"
Jan 21 21:39:57 crc kubenswrapper[4860]: E0121 21:39:57.808339 4860 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Jan 21 21:39:57 crc kubenswrapper[4860]: E0121 21:39:57.808815 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data podName:51ed7e08-65d4-4e78-8f05-3974349dc260 nodeName:}" failed. No retries permitted until 2026-01-21 21:40:01.80879488 +0000 UTC m=+1894.030973350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "51ed7e08-65d4-4e78-8f05-3974349dc260") : secret "watcher-kuttl-decision-engine-config-data" not found
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.150838 4860 generic.go:334] "Generic (PLEG): container finished" podID="86227605-09b1-4487-a65b-bb35de55fae1" containerID="b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926" exitCode=0
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.150924 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"86227605-09b1-4487-a65b-bb35de55fae1","Type":"ContainerDied","Data":"b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926"}
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.381743 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-97wqz"]
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.392466 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-97wqz"]
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.403583 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"]
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.414469 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchered96-account-delete-wcxvm"]
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.425627 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-ed96-account-create-update-4wj9p"]
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.435398 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchered96-account-delete-wcxvm"]
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.596926 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2829057d-3b80-46eb-af3b-b2132c283963" path="/var/lib/kubelet/pods/2829057d-3b80-46eb-af3b-b2132c283963/volumes"
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.598049 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416d2417-81b6-4ccc-b775-3b403eda7a74" path="/var/lib/kubelet/pods/416d2417-81b6-4ccc-b775-3b403eda7a74/volumes"
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.598623 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7292d6f-3ae7-456f-959a-58631b49ec0d" path="/var/lib/kubelet/pods/f7292d6f-3ae7-456f-959a-58631b49ec0d/volumes"
Jan 21 21:39:58 crc kubenswrapper[4860]: E0121 21:39:58.731588 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff is running failed: container process not found" containerID="05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:39:58 crc kubenswrapper[4860]: E0121 21:39:58.732353 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff is running failed: container process not found" containerID="05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:39:58 crc kubenswrapper[4860]: E0121 21:39:58.732792 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff is running failed: container process not found" containerID="05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:39:58 crc kubenswrapper[4860]: E0121 21:39:58.732839 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff is running failed: container process not found" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="b10a74e6-0097-4e91-9d5b-72169c3ffc36" containerName="watcher-applier"
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.804554 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.932641 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-cert-memcached-mtls\") pod \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") "
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.932784 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10a74e6-0097-4e91-9d5b-72169c3ffc36-logs\") pod \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") "
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.932820 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-config-data\") pod \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") "
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.932973 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntv5r\" (UniqueName: \"kubernetes.io/projected/b10a74e6-0097-4e91-9d5b-72169c3ffc36-kube-api-access-ntv5r\") pod \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") "
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.933006 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-combined-ca-bundle\") pod \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\" (UID: \"b10a74e6-0097-4e91-9d5b-72169c3ffc36\") "
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.934963 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b10a74e6-0097-4e91-9d5b-72169c3ffc36-logs" (OuterVolumeSpecName: "logs") pod "b10a74e6-0097-4e91-9d5b-72169c3ffc36" (UID: "b10a74e6-0097-4e91-9d5b-72169c3ffc36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:39:58 crc kubenswrapper[4860]: I0121 21:39:58.942247 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10a74e6-0097-4e91-9d5b-72169c3ffc36-kube-api-access-ntv5r" (OuterVolumeSpecName: "kube-api-access-ntv5r") pod "b10a74e6-0097-4e91-9d5b-72169c3ffc36" (UID: "b10a74e6-0097-4e91-9d5b-72169c3ffc36"). InnerVolumeSpecName "kube-api-access-ntv5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.007026 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b10a74e6-0097-4e91-9d5b-72169c3ffc36" (UID: "b10a74e6-0097-4e91-9d5b-72169c3ffc36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.032110 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b10a74e6-0097-4e91-9d5b-72169c3ffc36" (UID: "b10a74e6-0097-4e91-9d5b-72169c3ffc36"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.034927 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.034961 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10a74e6-0097-4e91-9d5b-72169c3ffc36-logs\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.034974 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntv5r\" (UniqueName: \"kubernetes.io/projected/b10a74e6-0097-4e91-9d5b-72169c3ffc36-kube-api-access-ntv5r\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.034988 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.035999 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-config-data" (OuterVolumeSpecName: "config-data") pod "b10a74e6-0097-4e91-9d5b-72169c3ffc36" (UID: "b10a74e6-0097-4e91-9d5b-72169c3ffc36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.137353 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10a74e6-0097-4e91-9d5b-72169c3ffc36-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.159196 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.169169 4860 generic.go:334] "Generic (PLEG): container finished" podID="86227605-09b1-4487-a65b-bb35de55fae1" containerID="b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006" exitCode=0
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.169236 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"86227605-09b1-4487-a65b-bb35de55fae1","Type":"ContainerDied","Data":"b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006"}
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.169269 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"86227605-09b1-4487-a65b-bb35de55fae1","Type":"ContainerDied","Data":"c40251770fe75bb95b0e6d25865ded482e2423a7561f7747e57a34b06fd12515"}
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.169293 4860 scope.go:117] "RemoveContainer" containerID="45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44"
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.169428 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.175980 4860 generic.go:334] "Generic (PLEG): container finished" podID="b10a74e6-0097-4e91-9d5b-72169c3ffc36" containerID="05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff" exitCode=0
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.176057 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b10a74e6-0097-4e91-9d5b-72169c3ffc36","Type":"ContainerDied","Data":"05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff"}
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.176125 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.176148 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b10a74e6-0097-4e91-9d5b-72169c3ffc36","Type":"ContainerDied","Data":"68c7ea0d63112ae7f7cf93e1a4badb018ac773255e68806515eeb93ce73d3b00"}
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.209536 4860 scope.go:117] "RemoveContainer" containerID="04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd"
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.231317 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.242433 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.262565 4860 scope.go:117] "RemoveContainer" containerID="b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006"
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.313806 4860 scope.go:117] "RemoveContainer" containerID="b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926"
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.339010 4860 scope.go:117] "RemoveContainer" containerID="45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44"
Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.340120 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44\": container with ID starting with 45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44 not found: ID does not exist" containerID="45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44"
Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.340188 4860
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44"} err="failed to get container status \"45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44\": rpc error: code = NotFound desc = could not find container \"45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44\": container with ID starting with 45e1b6bfbd66c337b00525bee9f5f13b7266966e890328b1fdf2c35cecbbdf44 not found: ID does not exist" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.340227 4860 scope.go:117] "RemoveContainer" containerID="04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd" Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.340671 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd\": container with ID starting with 04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd not found: ID does not exist" containerID="04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.340719 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd"} err="failed to get container status \"04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd\": rpc error: code = NotFound desc = could not find container \"04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd\": container with ID starting with 04822a1e765cc3256b5d230ae21059252bd4ea38c4e8d8731a0143b74f0996dd not found: ID does not exist" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.340742 4860 scope.go:117] "RemoveContainer" containerID="b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 
21:39:59.341022 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-config-data\") pod \"86227605-09b1-4487-a65b-bb35de55fae1\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.341221 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-ceilometer-tls-certs\") pod \"86227605-09b1-4487-a65b-bb35de55fae1\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.341349 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-combined-ca-bundle\") pod \"86227605-09b1-4487-a65b-bb35de55fae1\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.341113 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006\": container with ID starting with b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006 not found: ID does not exist" containerID="b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.341638 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2bml\" (UniqueName: \"kubernetes.io/projected/86227605-09b1-4487-a65b-bb35de55fae1-kube-api-access-v2bml\") pod \"86227605-09b1-4487-a65b-bb35de55fae1\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.341805 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-log-httpd\") pod \"86227605-09b1-4487-a65b-bb35de55fae1\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.345853 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-sg-core-conf-yaml\") pod \"86227605-09b1-4487-a65b-bb35de55fae1\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.346122 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-run-httpd\") pod \"86227605-09b1-4487-a65b-bb35de55fae1\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.346473 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-scripts\") pod \"86227605-09b1-4487-a65b-bb35de55fae1\" (UID: \"86227605-09b1-4487-a65b-bb35de55fae1\") " Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.346974 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "86227605-09b1-4487-a65b-bb35de55fae1" (UID: "86227605-09b1-4487-a65b-bb35de55fae1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.347543 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "86227605-09b1-4487-a65b-bb35de55fae1" (UID: "86227605-09b1-4487-a65b-bb35de55fae1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.341629 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006"} err="failed to get container status \"b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006\": rpc error: code = NotFound desc = could not find container \"b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006\": container with ID starting with b44c1fb0eab88ffa867241b5f9729867a625245a4cf82bbe0dd71dd66f47c006 not found: ID does not exist" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.347604 4860 scope.go:117] "RemoveContainer" containerID="b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926" Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.348178 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926\": container with ID starting with b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926 not found: ID does not exist" containerID="b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.348216 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926"} err="failed to get container status 
\"b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926\": rpc error: code = NotFound desc = could not find container \"b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926\": container with ID starting with b41aed78a4b41dc5762a4e6a3a1952dafa8cf6b588b58983a06c4f1c6e6ec926 not found: ID does not exist" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.348257 4860 scope.go:117] "RemoveContainer" containerID="05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.366821 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.366925 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86227605-09b1-4487-a65b-bb35de55fae1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.368297 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-scripts" (OuterVolumeSpecName: "scripts") pod "86227605-09b1-4487-a65b-bb35de55fae1" (UID: "86227605-09b1-4487-a65b-bb35de55fae1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.369169 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86227605-09b1-4487-a65b-bb35de55fae1-kube-api-access-v2bml" (OuterVolumeSpecName: "kube-api-access-v2bml") pod "86227605-09b1-4487-a65b-bb35de55fae1" (UID: "86227605-09b1-4487-a65b-bb35de55fae1"). InnerVolumeSpecName "kube-api-access-v2bml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.404153 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "86227605-09b1-4487-a65b-bb35de55fae1" (UID: "86227605-09b1-4487-a65b-bb35de55fae1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.423521 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "86227605-09b1-4487-a65b-bb35de55fae1" (UID: "86227605-09b1-4487-a65b-bb35de55fae1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.443458 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86227605-09b1-4487-a65b-bb35de55fae1" (UID: "86227605-09b1-4487-a65b-bb35de55fae1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.472791 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.472867 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.472880 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.472893 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.472905 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2bml\" (UniqueName: \"kubernetes.io/projected/86227605-09b1-4487-a65b-bb35de55fae1-kube-api-access-v2bml\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.492906 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-config-data" (OuterVolumeSpecName: "config-data") pod "86227605-09b1-4487-a65b-bb35de55fae1" (UID: "86227605-09b1-4487-a65b-bb35de55fae1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.533912 4860 scope.go:117] "RemoveContainer" containerID="05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff" Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.534809 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff\": container with ID starting with 05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff not found: ID does not exist" containerID="05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.534885 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff"} err="failed to get container status \"05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff\": rpc error: code = NotFound desc = could not find container \"05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff\": container with ID starting with 05e5a5c31214820f9b32fde0ee69720aacc942a52273464809328e68f19ab8ff not found: ID does not exist" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.575440 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86227605-09b1-4487-a65b-bb35de55fae1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.809782 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.824330 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.862884 4860 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.863979 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="ceilometer-notification-agent" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.864235 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="ceilometer-notification-agent" Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.864337 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="ceilometer-central-agent" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.864418 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="ceilometer-central-agent" Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.864546 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="sg-core" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.864620 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="sg-core" Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.864694 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01200251-c652-48cd-ac68-c422cd325f71" containerName="watcher-api" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.864764 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="01200251-c652-48cd-ac68-c422cd325f71" containerName="watcher-api" Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.864835 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="proxy-httpd" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.864906 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="proxy-httpd" Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.865013 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416d2417-81b6-4ccc-b775-3b403eda7a74" containerName="mariadb-account-delete" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.865096 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="416d2417-81b6-4ccc-b775-3b403eda7a74" containerName="mariadb-account-delete" Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.865201 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10a74e6-0097-4e91-9d5b-72169c3ffc36" containerName="watcher-applier" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.865279 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10a74e6-0097-4e91-9d5b-72169c3ffc36" containerName="watcher-applier" Jan 21 21:39:59 crc kubenswrapper[4860]: E0121 21:39:59.865358 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01200251-c652-48cd-ac68-c422cd325f71" containerName="watcher-kuttl-api-log" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.865426 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="01200251-c652-48cd-ac68-c422cd325f71" containerName="watcher-kuttl-api-log" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.865800 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="sg-core" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.865903 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="01200251-c652-48cd-ac68-c422cd325f71" containerName="watcher-api" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.866009 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="ceilometer-notification-agent" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.866090 4860 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="01200251-c652-48cd-ac68-c422cd325f71" containerName="watcher-kuttl-api-log" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.866168 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="proxy-httpd" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.866249 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="416d2417-81b6-4ccc-b775-3b403eda7a74" containerName="mariadb-account-delete" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.866327 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10a74e6-0097-4e91-9d5b-72169c3ffc36" containerName="watcher-applier" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.866420 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="86227605-09b1-4487-a65b-bb35de55fae1" containerName="ceilometer-central-agent" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.874671 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.880562 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.880850 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.881017 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.881222 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.991341 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.991403 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-scripts\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.991435 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-log-httpd\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.991504 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.991544 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-run-httpd\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.991571 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.991626 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-config-data\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:39:59 crc kubenswrapper[4860]: I0121 21:39:59.991666 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrz8k\" (UniqueName: \"kubernetes.io/projected/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-kube-api-access-nrz8k\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.093318 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.093671 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-scripts\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.093819 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-log-httpd\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.093963 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.094087 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-run-httpd\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.094190 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.094358 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-config-data\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.095078 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrz8k\" (UniqueName: \"kubernetes.io/projected/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-kube-api-access-nrz8k\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.094986 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-log-httpd\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.094728 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-run-httpd\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.101361 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.101512 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-scripts\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.101827 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.104806 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-config-data\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.105770 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.116727 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrz8k\" (UniqueName: \"kubernetes.io/projected/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-kube-api-access-nrz8k\") pod \"ceilometer-0\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.200053 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.598767 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86227605-09b1-4487-a65b-bb35de55fae1" path="/var/lib/kubelet/pods/86227605-09b1-4487-a65b-bb35de55fae1/volumes" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.600820 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10a74e6-0097-4e91-9d5b-72169c3ffc36" path="/var/lib/kubelet/pods/b10a74e6-0097-4e91-9d5b-72169c3ffc36/volumes" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.730268 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.874046 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.912412 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkcdl\" (UniqueName: \"kubernetes.io/projected/51ed7e08-65d4-4e78-8f05-3974349dc260-kube-api-access-dkcdl\") pod \"51ed7e08-65d4-4e78-8f05-3974349dc260\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.912530 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-custom-prometheus-ca\") pod \"51ed7e08-65d4-4e78-8f05-3974349dc260\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.912664 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data\") pod \"51ed7e08-65d4-4e78-8f05-3974349dc260\" (UID: 
\"51ed7e08-65d4-4e78-8f05-3974349dc260\") " Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.912693 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-cert-memcached-mtls\") pod \"51ed7e08-65d4-4e78-8f05-3974349dc260\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.912760 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-combined-ca-bundle\") pod \"51ed7e08-65d4-4e78-8f05-3974349dc260\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.912850 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ed7e08-65d4-4e78-8f05-3974349dc260-logs\") pod \"51ed7e08-65d4-4e78-8f05-3974349dc260\" (UID: \"51ed7e08-65d4-4e78-8f05-3974349dc260\") " Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.913901 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ed7e08-65d4-4e78-8f05-3974349dc260-logs" (OuterVolumeSpecName: "logs") pod "51ed7e08-65d4-4e78-8f05-3974349dc260" (UID: "51ed7e08-65d4-4e78-8f05-3974349dc260"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.919404 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ed7e08-65d4-4e78-8f05-3974349dc260-kube-api-access-dkcdl" (OuterVolumeSpecName: "kube-api-access-dkcdl") pod "51ed7e08-65d4-4e78-8f05-3974349dc260" (UID: "51ed7e08-65d4-4e78-8f05-3974349dc260"). InnerVolumeSpecName "kube-api-access-dkcdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.943573 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "51ed7e08-65d4-4e78-8f05-3974349dc260" (UID: "51ed7e08-65d4-4e78-8f05-3974349dc260"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.944623 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51ed7e08-65d4-4e78-8f05-3974349dc260" (UID: "51ed7e08-65d4-4e78-8f05-3974349dc260"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:00 crc kubenswrapper[4860]: I0121 21:40:00.965754 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data" (OuterVolumeSpecName: "config-data") pod "51ed7e08-65d4-4e78-8f05-3974349dc260" (UID: "51ed7e08-65d4-4e78-8f05-3974349dc260"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.008482 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "51ed7e08-65d4-4e78-8f05-3974349dc260" (UID: "51ed7e08-65d4-4e78-8f05-3974349dc260"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.015231 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.015295 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ed7e08-65d4-4e78-8f05-3974349dc260-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.015308 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkcdl\" (UniqueName: \"kubernetes.io/projected/51ed7e08-65d4-4e78-8f05-3974349dc260-kube-api-access-dkcdl\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.015428 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.015446 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.015459 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/51ed7e08-65d4-4e78-8f05-3974349dc260-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.202188 4860 generic.go:334] "Generic (PLEG): container finished" podID="51ed7e08-65d4-4e78-8f05-3974349dc260" containerID="76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69" exitCode=0 Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.202283 4860 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"51ed7e08-65d4-4e78-8f05-3974349dc260","Type":"ContainerDied","Data":"76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69"} Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.202329 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"51ed7e08-65d4-4e78-8f05-3974349dc260","Type":"ContainerDied","Data":"a654109ead75eefed97d53462dfb6c94d1d2f1bc1983d0f7f9e5e62e61261738"} Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.202317 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.202404 4860 scope.go:117] "RemoveContainer" containerID="76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.208007 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5e34a2e8-76ed-4064-a32d-26e4c5e01c20","Type":"ContainerStarted","Data":"8b54c3c64a83dd7e8985ae4dbaef01db52e27b6a15750b737d9b2e6fd0619f2a"} Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.234674 4860 scope.go:117] "RemoveContainer" containerID="76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69" Jan 21 21:40:01 crc kubenswrapper[4860]: E0121 21:40:01.235631 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69\": container with ID starting with 76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69 not found: ID does not exist" containerID="76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.235682 4860 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69"} err="failed to get container status \"76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69\": rpc error: code = NotFound desc = could not find container \"76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69\": container with ID starting with 76ac7fd64d3a7aa069399ec3525540d9cb4edbed363a1e7c3045d74e1aee6c69 not found: ID does not exist" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.269395 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.284792 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.819304 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-j67mm"] Jan 21 21:40:01 crc kubenswrapper[4860]: E0121 21:40:01.820201 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ed7e08-65d4-4e78-8f05-3974349dc260" containerName="watcher-decision-engine" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.820737 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ed7e08-65d4-4e78-8f05-3974349dc260" containerName="watcher-decision-engine" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.820981 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ed7e08-65d4-4e78-8f05-3974349dc260" containerName="watcher-decision-engine" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.821837 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j67mm" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.842977 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-a25b-account-create-update-z2swc"] Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.844652 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.848773 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.855072 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j67mm"] Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.866184 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-a25b-account-create-update-z2swc"] Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.935573 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddcw\" (UniqueName: \"kubernetes.io/projected/dc20ca9f-a742-40b7-b242-de037cc7f509-kube-api-access-5ddcw\") pod \"watcher-a25b-account-create-update-z2swc\" (UID: \"dc20ca9f-a742-40b7-b242-de037cc7f509\") " pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.935648 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc20ca9f-a742-40b7-b242-de037cc7f509-operator-scripts\") pod \"watcher-a25b-account-create-update-z2swc\" (UID: \"dc20ca9f-a742-40b7-b242-de037cc7f509\") " pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.935708 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-operator-scripts\") pod \"watcher-db-create-j67mm\" (UID: \"0bbcbb6f-2445-4dcd-8530-32b068ce64a5\") " pod="watcher-kuttl-default/watcher-db-create-j67mm" Jan 21 21:40:01 crc kubenswrapper[4860]: I0121 21:40:01.935755 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwcdd\" (UniqueName: \"kubernetes.io/projected/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-kube-api-access-bwcdd\") pod \"watcher-db-create-j67mm\" (UID: \"0bbcbb6f-2445-4dcd-8530-32b068ce64a5\") " pod="watcher-kuttl-default/watcher-db-create-j67mm" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.038038 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ddcw\" (UniqueName: \"kubernetes.io/projected/dc20ca9f-a742-40b7-b242-de037cc7f509-kube-api-access-5ddcw\") pod \"watcher-a25b-account-create-update-z2swc\" (UID: \"dc20ca9f-a742-40b7-b242-de037cc7f509\") " pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.038098 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc20ca9f-a742-40b7-b242-de037cc7f509-operator-scripts\") pod \"watcher-a25b-account-create-update-z2swc\" (UID: \"dc20ca9f-a742-40b7-b242-de037cc7f509\") " pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.038160 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-operator-scripts\") pod \"watcher-db-create-j67mm\" (UID: \"0bbcbb6f-2445-4dcd-8530-32b068ce64a5\") " 
pod="watcher-kuttl-default/watcher-db-create-j67mm" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.038209 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwcdd\" (UniqueName: \"kubernetes.io/projected/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-kube-api-access-bwcdd\") pod \"watcher-db-create-j67mm\" (UID: \"0bbcbb6f-2445-4dcd-8530-32b068ce64a5\") " pod="watcher-kuttl-default/watcher-db-create-j67mm" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.039533 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc20ca9f-a742-40b7-b242-de037cc7f509-operator-scripts\") pod \"watcher-a25b-account-create-update-z2swc\" (UID: \"dc20ca9f-a742-40b7-b242-de037cc7f509\") " pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.039799 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-operator-scripts\") pod \"watcher-db-create-j67mm\" (UID: \"0bbcbb6f-2445-4dcd-8530-32b068ce64a5\") " pod="watcher-kuttl-default/watcher-db-create-j67mm" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.063878 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwcdd\" (UniqueName: \"kubernetes.io/projected/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-kube-api-access-bwcdd\") pod \"watcher-db-create-j67mm\" (UID: \"0bbcbb6f-2445-4dcd-8530-32b068ce64a5\") " pod="watcher-kuttl-default/watcher-db-create-j67mm" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.068196 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ddcw\" (UniqueName: \"kubernetes.io/projected/dc20ca9f-a742-40b7-b242-de037cc7f509-kube-api-access-5ddcw\") pod \"watcher-a25b-account-create-update-z2swc\" (UID: 
\"dc20ca9f-a742-40b7-b242-de037cc7f509\") " pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.145214 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j67mm" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.167876 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.226330 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5e34a2e8-76ed-4064-a32d-26e4c5e01c20","Type":"ContainerStarted","Data":"799f0ca7d4b0e6cb05718ad3158d8b0555759970232d63317a4aedc2686fd103"} Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.618105 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ed7e08-65d4-4e78-8f05-3974349dc260" path="/var/lib/kubelet/pods/51ed7e08-65d4-4e78-8f05-3974349dc260/volumes" Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.801812 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j67mm"] Jan 21 21:40:02 crc kubenswrapper[4860]: W0121 21:40:02.854458 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bbcbb6f_2445_4dcd_8530_32b068ce64a5.slice/crio-cd7f00d51f3262d6fa6f3a5f1cd642e10982930b6d6630aedc25a45eb2f8183b WatchSource:0}: Error finding container cd7f00d51f3262d6fa6f3a5f1cd642e10982930b6d6630aedc25a45eb2f8183b: Status 404 returned error can't find the container with id cd7f00d51f3262d6fa6f3a5f1cd642e10982930b6d6630aedc25a45eb2f8183b Jan 21 21:40:02 crc kubenswrapper[4860]: I0121 21:40:02.928426 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-a25b-account-create-update-z2swc"] Jan 21 21:40:02 crc 
kubenswrapper[4860]: W0121 21:40:02.954386 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc20ca9f_a742_40b7_b242_de037cc7f509.slice/crio-41d7e8925776561b44d3fe9013f85c4c3cb218a8d1571f3bed8ffbba34f527ec WatchSource:0}: Error finding container 41d7e8925776561b44d3fe9013f85c4c3cb218a8d1571f3bed8ffbba34f527ec: Status 404 returned error can't find the container with id 41d7e8925776561b44d3fe9013f85c4c3cb218a8d1571f3bed8ffbba34f527ec Jan 21 21:40:03 crc kubenswrapper[4860]: I0121 21:40:03.253725 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5e34a2e8-76ed-4064-a32d-26e4c5e01c20","Type":"ContainerStarted","Data":"c7cd463a376b8fa6de67e1c5a880b3b083f2e2a044cb4ab40d97cd52f06e5354"} Jan 21 21:40:03 crc kubenswrapper[4860]: I0121 21:40:03.253788 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5e34a2e8-76ed-4064-a32d-26e4c5e01c20","Type":"ContainerStarted","Data":"41f2b3848e6d14b309a328eaefc6a97d936963f3080e2b43b9bacb53fc9070bb"} Jan 21 21:40:03 crc kubenswrapper[4860]: I0121 21:40:03.256225 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-j67mm" event={"ID":"0bbcbb6f-2445-4dcd-8530-32b068ce64a5","Type":"ContainerStarted","Data":"ced8aa5acf2426673c848b571b32fb429e700f6cae49453fafa37f84e8c87c94"} Jan 21 21:40:03 crc kubenswrapper[4860]: I0121 21:40:03.256259 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-j67mm" event={"ID":"0bbcbb6f-2445-4dcd-8530-32b068ce64a5","Type":"ContainerStarted","Data":"cd7f00d51f3262d6fa6f3a5f1cd642e10982930b6d6630aedc25a45eb2f8183b"} Jan 21 21:40:03 crc kubenswrapper[4860]: I0121 21:40:03.260022 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" 
event={"ID":"dc20ca9f-a742-40b7-b242-de037cc7f509","Type":"ContainerStarted","Data":"07696da3ca0ec4b70ee33d819385b06267103c2cf49f09c5432b0fa427e3ede0"} Jan 21 21:40:03 crc kubenswrapper[4860]: I0121 21:40:03.260073 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" event={"ID":"dc20ca9f-a742-40b7-b242-de037cc7f509","Type":"ContainerStarted","Data":"41d7e8925776561b44d3fe9013f85c4c3cb218a8d1571f3bed8ffbba34f527ec"} Jan 21 21:40:03 crc kubenswrapper[4860]: I0121 21:40:03.282798 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-j67mm" podStartSLOduration=2.282770499 podStartE2EDuration="2.282770499s" podCreationTimestamp="2026-01-21 21:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:40:03.278962591 +0000 UTC m=+1895.501141071" watchObservedRunningTime="2026-01-21 21:40:03.282770499 +0000 UTC m=+1895.504948969" Jan 21 21:40:04 crc kubenswrapper[4860]: I0121 21:40:04.273712 4860 generic.go:334] "Generic (PLEG): container finished" podID="dc20ca9f-a742-40b7-b242-de037cc7f509" containerID="07696da3ca0ec4b70ee33d819385b06267103c2cf49f09c5432b0fa427e3ede0" exitCode=0 Jan 21 21:40:04 crc kubenswrapper[4860]: I0121 21:40:04.273814 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" event={"ID":"dc20ca9f-a742-40b7-b242-de037cc7f509","Type":"ContainerDied","Data":"07696da3ca0ec4b70ee33d819385b06267103c2cf49f09c5432b0fa427e3ede0"} Jan 21 21:40:04 crc kubenswrapper[4860]: I0121 21:40:04.277314 4860 generic.go:334] "Generic (PLEG): container finished" podID="0bbcbb6f-2445-4dcd-8530-32b068ce64a5" containerID="ced8aa5acf2426673c848b571b32fb429e700f6cae49453fafa37f84e8c87c94" exitCode=0 Jan 21 21:40:04 crc kubenswrapper[4860]: I0121 21:40:04.277373 4860 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-j67mm" event={"ID":"0bbcbb6f-2445-4dcd-8530-32b068ce64a5","Type":"ContainerDied","Data":"ced8aa5acf2426673c848b571b32fb429e700f6cae49453fafa37f84e8c87c94"} Jan 21 21:40:05 crc kubenswrapper[4860]: I0121 21:40:05.316697 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5e34a2e8-76ed-4064-a32d-26e4c5e01c20","Type":"ContainerStarted","Data":"58e58897c6b9326584a83de568687e3019048abccfbd53322bcec9bea557e42b"} Jan 21 21:40:05 crc kubenswrapper[4860]: I0121 21:40:05.847964 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j67mm" Jan 21 21:40:05 crc kubenswrapper[4860]: I0121 21:40:05.856315 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" Jan 21 21:40:05 crc kubenswrapper[4860]: I0121 21:40:05.867021 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=3.269900554 podStartE2EDuration="6.8669809s" podCreationTimestamp="2026-01-21 21:39:59 +0000 UTC" firstStartedPulling="2026-01-21 21:40:00.738378212 +0000 UTC m=+1892.960556682" lastFinishedPulling="2026-01-21 21:40:04.335458558 +0000 UTC m=+1896.557637028" observedRunningTime="2026-01-21 21:40:05.345258298 +0000 UTC m=+1897.567436778" watchObservedRunningTime="2026-01-21 21:40:05.8669809 +0000 UTC m=+1898.089159380" Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.015049 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc20ca9f-a742-40b7-b242-de037cc7f509-operator-scripts\") pod \"dc20ca9f-a742-40b7-b242-de037cc7f509\" (UID: \"dc20ca9f-a742-40b7-b242-de037cc7f509\") " Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 
21:40:06.015613 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-operator-scripts\") pod \"0bbcbb6f-2445-4dcd-8530-32b068ce64a5\" (UID: \"0bbcbb6f-2445-4dcd-8530-32b068ce64a5\") " Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.015652 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwcdd\" (UniqueName: \"kubernetes.io/projected/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-kube-api-access-bwcdd\") pod \"0bbcbb6f-2445-4dcd-8530-32b068ce64a5\" (UID: \"0bbcbb6f-2445-4dcd-8530-32b068ce64a5\") " Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.015684 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ddcw\" (UniqueName: \"kubernetes.io/projected/dc20ca9f-a742-40b7-b242-de037cc7f509-kube-api-access-5ddcw\") pod \"dc20ca9f-a742-40b7-b242-de037cc7f509\" (UID: \"dc20ca9f-a742-40b7-b242-de037cc7f509\") " Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.016082 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc20ca9f-a742-40b7-b242-de037cc7f509-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc20ca9f-a742-40b7-b242-de037cc7f509" (UID: "dc20ca9f-a742-40b7-b242-de037cc7f509"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.016362 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc20ca9f-a742-40b7-b242-de037cc7f509-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.016537 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bbcbb6f-2445-4dcd-8530-32b068ce64a5" (UID: "0bbcbb6f-2445-4dcd-8530-32b068ce64a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.036748 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-kube-api-access-bwcdd" (OuterVolumeSpecName: "kube-api-access-bwcdd") pod "0bbcbb6f-2445-4dcd-8530-32b068ce64a5" (UID: "0bbcbb6f-2445-4dcd-8530-32b068ce64a5"). InnerVolumeSpecName "kube-api-access-bwcdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.036842 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc20ca9f-a742-40b7-b242-de037cc7f509-kube-api-access-5ddcw" (OuterVolumeSpecName: "kube-api-access-5ddcw") pod "dc20ca9f-a742-40b7-b242-de037cc7f509" (UID: "dc20ca9f-a742-40b7-b242-de037cc7f509"). InnerVolumeSpecName "kube-api-access-5ddcw". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.118644 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.118693 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwcdd\" (UniqueName: \"kubernetes.io/projected/0bbcbb6f-2445-4dcd-8530-32b068ce64a5-kube-api-access-bwcdd\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.118706 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ddcw\" (UniqueName: \"kubernetes.io/projected/dc20ca9f-a742-40b7-b242-de037cc7f509-kube-api-access-5ddcw\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.328969 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-j67mm" event={"ID":"0bbcbb6f-2445-4dcd-8530-32b068ce64a5","Type":"ContainerDied","Data":"cd7f00d51f3262d6fa6f3a5f1cd642e10982930b6d6630aedc25a45eb2f8183b"}
Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.329029 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd7f00d51f3262d6fa6f3a5f1cd642e10982930b6d6630aedc25a45eb2f8183b"
Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.329098 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-j67mm"
Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.344219 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc"
Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.344267 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-a25b-account-create-update-z2swc" event={"ID":"dc20ca9f-a742-40b7-b242-de037cc7f509","Type":"ContainerDied","Data":"41d7e8925776561b44d3fe9013f85c4c3cb218a8d1571f3bed8ffbba34f527ec"}
Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.344331 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41d7e8925776561b44d3fe9013f85c4c3cb218a8d1571f3bed8ffbba34f527ec"
Jan 21 21:40:06 crc kubenswrapper[4860]: I0121 21:40:06.344447 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.476869 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"]
Jan 21 21:40:07 crc kubenswrapper[4860]: E0121 21:40:07.480465 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc20ca9f-a742-40b7-b242-de037cc7f509" containerName="mariadb-account-create-update"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.480560 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc20ca9f-a742-40b7-b242-de037cc7f509" containerName="mariadb-account-create-update"
Jan 21 21:40:07 crc kubenswrapper[4860]: E0121 21:40:07.480625 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbcbb6f-2445-4dcd-8530-32b068ce64a5" containerName="mariadb-database-create"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.480679 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbcbb6f-2445-4dcd-8530-32b068ce64a5" containerName="mariadb-database-create"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.481012 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc20ca9f-a742-40b7-b242-de037cc7f509" containerName="mariadb-account-create-update"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.481118 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bbcbb6f-2445-4dcd-8530-32b068ce64a5" containerName="mariadb-database-create"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.481925 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.487788 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"]
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.490527 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-zdxgs"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.490651 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.501619 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-config-data\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.501718 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.501745 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.501820 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjf5\" (UniqueName: \"kubernetes.io/projected/a3d80256-acbd-42e1-9e23-4ab10f79f38f-kube-api-access-zgjf5\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.603347 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.604002 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.605068 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgjf5\" (UniqueName: \"kubernetes.io/projected/a3d80256-acbd-42e1-9e23-4ab10f79f38f-kube-api-access-zgjf5\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.605210 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-config-data\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.612019 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.612352 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.612832 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-config-data\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.623917 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgjf5\" (UniqueName: \"kubernetes.io/projected/a3d80256-acbd-42e1-9e23-4ab10f79f38f-kube-api-access-zgjf5\") pod \"watcher-kuttl-db-sync-kvpj9\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:07 crc kubenswrapper[4860]: I0121 21:40:07.804246 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:08 crc kubenswrapper[4860]: I0121 21:40:08.345041 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"]
Jan 21 21:40:08 crc kubenswrapper[4860]: I0121 21:40:08.455527 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9" event={"ID":"a3d80256-acbd-42e1-9e23-4ab10f79f38f","Type":"ContainerStarted","Data":"497d5d14e20c273d61263f454d66026013d2a7c3287374409c974c368b1260bd"}
Jan 21 21:40:09 crc kubenswrapper[4860]: I0121 21:40:09.470848 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9" event={"ID":"a3d80256-acbd-42e1-9e23-4ab10f79f38f","Type":"ContainerStarted","Data":"75f8162f49ca4a6c3802a1e7b0f922ba950acc41d2202fe973322248b4ff08c7"}
Jan 21 21:40:09 crc kubenswrapper[4860]: I0121 21:40:09.520971 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9" podStartSLOduration=2.520918266 podStartE2EDuration="2.520918266s" podCreationTimestamp="2026-01-21 21:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:40:09.499293176 +0000 UTC m=+1901.721471646" watchObservedRunningTime="2026-01-21 21:40:09.520918266 +0000 UTC m=+1901.743096746"
Jan 21 21:40:12 crc kubenswrapper[4860]: I0121 21:40:12.502517 4860 generic.go:334] "Generic (PLEG): container finished" podID="a3d80256-acbd-42e1-9e23-4ab10f79f38f" containerID="75f8162f49ca4a6c3802a1e7b0f922ba950acc41d2202fe973322248b4ff08c7" exitCode=0
Jan 21 21:40:12 crc kubenswrapper[4860]: I0121 21:40:12.502607 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9" event={"ID":"a3d80256-acbd-42e1-9e23-4ab10f79f38f","Type":"ContainerDied","Data":"75f8162f49ca4a6c3802a1e7b0f922ba950acc41d2202fe973322248b4ff08c7"}
Jan 21 21:40:13 crc kubenswrapper[4860]: I0121 21:40:13.965457 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.089328 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-db-sync-config-data\") pod \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") "
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.089593 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgjf5\" (UniqueName: \"kubernetes.io/projected/a3d80256-acbd-42e1-9e23-4ab10f79f38f-kube-api-access-zgjf5\") pod \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") "
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.089641 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-config-data\") pod \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") "
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.089717 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-combined-ca-bundle\") pod \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\" (UID: \"a3d80256-acbd-42e1-9e23-4ab10f79f38f\") "
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.099436 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a3d80256-acbd-42e1-9e23-4ab10f79f38f" (UID: "a3d80256-acbd-42e1-9e23-4ab10f79f38f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.099583 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d80256-acbd-42e1-9e23-4ab10f79f38f-kube-api-access-zgjf5" (OuterVolumeSpecName: "kube-api-access-zgjf5") pod "a3d80256-acbd-42e1-9e23-4ab10f79f38f" (UID: "a3d80256-acbd-42e1-9e23-4ab10f79f38f"). InnerVolumeSpecName "kube-api-access-zgjf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.117254 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3d80256-acbd-42e1-9e23-4ab10f79f38f" (UID: "a3d80256-acbd-42e1-9e23-4ab10f79f38f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.141228 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-config-data" (OuterVolumeSpecName: "config-data") pod "a3d80256-acbd-42e1-9e23-4ab10f79f38f" (UID: "a3d80256-acbd-42e1-9e23-4ab10f79f38f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.191907 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.191973 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.191988 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3d80256-acbd-42e1-9e23-4ab10f79f38f-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.192001 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgjf5\" (UniqueName: \"kubernetes.io/projected/a3d80256-acbd-42e1-9e23-4ab10f79f38f-kube-api-access-zgjf5\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.526696 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9" event={"ID":"a3d80256-acbd-42e1-9e23-4ab10f79f38f","Type":"ContainerDied","Data":"497d5d14e20c273d61263f454d66026013d2a7c3287374409c974c368b1260bd"}
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.526770 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="497d5d14e20c273d61263f454d66026013d2a7c3287374409c974c368b1260bd"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.526876 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.853260 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:40:14 crc kubenswrapper[4860]: E0121 21:40:14.853883 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d80256-acbd-42e1-9e23-4ab10f79f38f" containerName="watcher-kuttl-db-sync"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.853911 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d80256-acbd-42e1-9e23-4ab10f79f38f" containerName="watcher-kuttl-db-sync"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.854188 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d80256-acbd-42e1-9e23-4ab10f79f38f" containerName="watcher-kuttl-db-sync"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.855399 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.863847 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.865263 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.866009 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.870794 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-zdxgs"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.890951 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.906573 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8354c729-beee-4a94-9e6a-50095582c1a9-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.906640 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.906679 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.906715 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae543dcb-2178-4d58-bc18-d5d56b268598-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.906745 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.906775 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9pl\" (UniqueName: \"kubernetes.io/projected/ae543dcb-2178-4d58-bc18-d5d56b268598-kube-api-access-cl9pl\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.907082 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.907206 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.907271 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79slc\" (UniqueName: \"kubernetes.io/projected/8354c729-beee-4a94-9e6a-50095582c1a9-kube-api-access-79slc\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.907355 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.907391 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.907623 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.910832 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.937018 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.938592 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.941244 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Jan 21 21:40:14 crc kubenswrapper[4860]: I0121 21:40:14.967675 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011144 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011248 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8354c729-beee-4a94-9e6a-50095582c1a9-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011283 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011313 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011349 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae543dcb-2178-4d58-bc18-d5d56b268598-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011374 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011404 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl9pl\" (UniqueName: \"kubernetes.io/projected/ae543dcb-2178-4d58-bc18-d5d56b268598-kube-api-access-cl9pl\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011441 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xpb4\" (UniqueName: \"kubernetes.io/projected/a3203815-18f3-4de0-9887-a921d5f309d3-kube-api-access-8xpb4\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011502 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011529 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011568 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3203815-18f3-4de0-9887-a921d5f309d3-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011597 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011629 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79slc\" (UniqueName: \"kubernetes.io/projected/8354c729-beee-4a94-9e6a-50095582c1a9-kube-api-access-79slc\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011658 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011675 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011722 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.011749 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.012651 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8354c729-beee-4a94-9e6a-50095582c1a9-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.013745 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae543dcb-2178-4d58-bc18-d5d56b268598-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.018941 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.020565 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.025988 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.026959 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.028140 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.028246 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.028295 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.028826 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.029602 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.032911 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.051609 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.081996 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79slc\" (UniqueName: \"kubernetes.io/projected/8354c729-beee-4a94-9e6a-50095582c1a9-kube-api-access-79slc\") pod \"watcher-kuttl-api-1\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.091347 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.099803 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl9pl\" (UniqueName: \"kubernetes.io/projected/ae543dcb-2178-4d58-bc18-d5d56b268598-kube-api-access-cl9pl\") pod \"watcher-kuttl-api-0\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.113165 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.113231 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3203815-18f3-4de0-9887-a921d5f309d3-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.113327 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.113379 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.113409 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.113436 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d3acbc7-567c-4f78-a33b-673e5c6b831a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.113466 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pqk7\" (UniqueName: \"kubernetes.io/projected/3d3acbc7-567c-4f78-a33b-673e5c6b831a-kube-api-access-8pqk7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.113519 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xpb4\" (UniqueName: \"kubernetes.io/projected/a3203815-18f3-4de0-9887-a921d5f309d3-kube-api-access-8xpb4\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.113574 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") "
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.113605 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.113628 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.114695 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3203815-18f3-4de0-9887-a921d5f309d3-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.119325 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.129102 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.140339 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.145662 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xpb4\" (UniqueName: \"kubernetes.io/projected/a3203815-18f3-4de0-9887-a921d5f309d3-kube-api-access-8xpb4\") pod \"watcher-kuttl-applier-0\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.197180 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.212907 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.214576 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pqk7\" (UniqueName: \"kubernetes.io/projected/3d3acbc7-567c-4f78-a33b-673e5c6b831a-kube-api-access-8pqk7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.214684 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.214708 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.214734 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.214770 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-config-data\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.214823 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d3acbc7-567c-4f78-a33b-673e5c6b831a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.215282 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d3acbc7-567c-4f78-a33b-673e5c6b831a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.227832 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.229895 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.231085 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-custom-prometheus-ca\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.237244 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.241583 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pqk7\" (UniqueName: \"kubernetes.io/projected/3d3acbc7-567c-4f78-a33b-673e5c6b831a-kube-api-access-8pqk7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.271510 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.513521 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.816621 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 21 21:40:15 crc kubenswrapper[4860]: W0121 21:40:15.821674 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8354c729_beee_4a94_9e6a_50095582c1a9.slice/crio-8f1f7ef2dbdda350571d26a51b3c7cafcc4b95b69798b29a2a719a8395112fef WatchSource:0}: Error finding container 8f1f7ef2dbdda350571d26a51b3c7cafcc4b95b69798b29a2a719a8395112fef: Status 404 returned error can't find the container with id 8f1f7ef2dbdda350571d26a51b3c7cafcc4b95b69798b29a2a719a8395112fef Jan 21 21:40:15 crc kubenswrapper[4860]: I0121 21:40:15.960189 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:40:15 crc kubenswrapper[4860]: W0121 21:40:15.978175 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae543dcb_2178_4d58_bc18_d5d56b268598.slice/crio-fdac4127e8dba9d97ddccfa0aff031211b2bd8d17fe144fc0a2916d370b0888b WatchSource:0}: Error finding container fdac4127e8dba9d97ddccfa0aff031211b2bd8d17fe144fc0a2916d370b0888b: Status 404 returned error can't find the container with id fdac4127e8dba9d97ddccfa0aff031211b2bd8d17fe144fc0a2916d370b0888b Jan 21 21:40:16 crc kubenswrapper[4860]: I0121 21:40:16.065107 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:40:16 crc kubenswrapper[4860]: W0121 21:40:16.094014 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3203815_18f3_4de0_9887_a921d5f309d3.slice/crio-298f885270bb0c71e26f431e5bf72567cde5114f9012d53cff47536dbeb3d402 WatchSource:0}: 
Error finding container 298f885270bb0c71e26f431e5bf72567cde5114f9012d53cff47536dbeb3d402: Status 404 returned error can't find the container with id 298f885270bb0c71e26f431e5bf72567cde5114f9012d53cff47536dbeb3d402 Jan 21 21:40:16 crc kubenswrapper[4860]: I0121 21:40:16.312000 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:40:16 crc kubenswrapper[4860]: I0121 21:40:16.642968 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ae543dcb-2178-4d58-bc18-d5d56b268598","Type":"ContainerStarted","Data":"fdac4127e8dba9d97ddccfa0aff031211b2bd8d17fe144fc0a2916d370b0888b"} Jan 21 21:40:16 crc kubenswrapper[4860]: I0121 21:40:16.643631 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a3203815-18f3-4de0-9887-a921d5f309d3","Type":"ContainerStarted","Data":"298f885270bb0c71e26f431e5bf72567cde5114f9012d53cff47536dbeb3d402"} Jan 21 21:40:16 crc kubenswrapper[4860]: I0121 21:40:16.643752 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3d3acbc7-567c-4f78-a33b-673e5c6b831a","Type":"ContainerStarted","Data":"c37bbf44fcd14ae54c5093d23cd907b6b2f78e67d40bf3945c7e764c4ee4365d"} Jan 21 21:40:16 crc kubenswrapper[4860]: I0121 21:40:16.669016 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8354c729-beee-4a94-9e6a-50095582c1a9","Type":"ContainerStarted","Data":"763e65087312b27f39e74190dfad2d864d5b682fb9c94482430c1a1b71f32a06"} Jan 21 21:40:16 crc kubenswrapper[4860]: I0121 21:40:16.669090 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8354c729-beee-4a94-9e6a-50095582c1a9","Type":"ContainerStarted","Data":"8f1f7ef2dbdda350571d26a51b3c7cafcc4b95b69798b29a2a719a8395112fef"} 
Jan 21 21:40:17 crc kubenswrapper[4860]: I0121 21:40:17.680885 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a3203815-18f3-4de0-9887-a921d5f309d3","Type":"ContainerStarted","Data":"6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449"} Jan 21 21:40:17 crc kubenswrapper[4860]: I0121 21:40:17.683247 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3d3acbc7-567c-4f78-a33b-673e5c6b831a","Type":"ContainerStarted","Data":"772b48e124d222cf711b1e5ea1a5f169211f24a0166d495e01711c6493ee4ef4"} Jan 21 21:40:17 crc kubenswrapper[4860]: I0121 21:40:17.685495 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8354c729-beee-4a94-9e6a-50095582c1a9","Type":"ContainerStarted","Data":"e9d24fcd5f7417cd2d28ee3ea00b4b4afc282eafd1564859e962251645abd87c"} Jan 21 21:40:17 crc kubenswrapper[4860]: I0121 21:40:17.685718 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:40:17 crc kubenswrapper[4860]: I0121 21:40:17.689020 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ae543dcb-2178-4d58-bc18-d5d56b268598","Type":"ContainerStarted","Data":"dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3"} Jan 21 21:40:17 crc kubenswrapper[4860]: I0121 21:40:17.689219 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ae543dcb-2178-4d58-bc18-d5d56b268598","Type":"ContainerStarted","Data":"346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43"} Jan 21 21:40:17 crc kubenswrapper[4860]: I0121 21:40:17.689794 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:40:17 crc 
kubenswrapper[4860]: I0121 21:40:17.693397 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="ae543dcb-2178-4d58-bc18-d5d56b268598" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.204:9322/\": dial tcp 10.217.0.204:9322: connect: connection refused" Jan 21 21:40:17 crc kubenswrapper[4860]: I0121 21:40:17.723299 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=3.723252678 podStartE2EDuration="3.723252678s" podCreationTimestamp="2026-01-21 21:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:40:17.712572568 +0000 UTC m=+1909.934751058" watchObservedRunningTime="2026-01-21 21:40:17.723252678 +0000 UTC m=+1909.945431158" Jan 21 21:40:17 crc kubenswrapper[4860]: I0121 21:40:17.749215 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=3.749176821 podStartE2EDuration="3.749176821s" podCreationTimestamp="2026-01-21 21:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:40:17.735651101 +0000 UTC m=+1909.957829571" watchObservedRunningTime="2026-01-21 21:40:17.749176821 +0000 UTC m=+1909.971355291" Jan 21 21:40:17 crc kubenswrapper[4860]: I0121 21:40:17.772389 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=3.772340267 podStartE2EDuration="3.772340267s" podCreationTimestamp="2026-01-21 21:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:40:17.764831905 +0000 UTC m=+1909.987010395" 
watchObservedRunningTime="2026-01-21 21:40:17.772340267 +0000 UTC m=+1909.994518737" Jan 21 21:40:17 crc kubenswrapper[4860]: I0121 21:40:17.836821 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=3.8367759120000002 podStartE2EDuration="3.836775912s" podCreationTimestamp="2026-01-21 21:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:40:17.791993386 +0000 UTC m=+1910.014171886" watchObservedRunningTime="2026-01-21 21:40:17.836775912 +0000 UTC m=+1910.058954402" Jan 21 21:40:20 crc kubenswrapper[4860]: I0121 21:40:20.197325 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:40:20 crc kubenswrapper[4860]: I0121 21:40:20.213572 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:40:20 crc kubenswrapper[4860]: I0121 21:40:20.214014 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:40:20 crc kubenswrapper[4860]: I0121 21:40:20.272625 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:40:20 crc kubenswrapper[4860]: I0121 21:40:20.465273 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:40:21 crc kubenswrapper[4860]: I0121 21:40:21.241797 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:40:24 crc kubenswrapper[4860]: I0121 21:40:24.079128 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-create-kl96t"] Jan 21 21:40:24 crc kubenswrapper[4860]: I0121 21:40:24.086396 4860 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["watcher-kuttl-default/root-account-create-update-9nb9k"] Jan 21 21:40:24 crc kubenswrapper[4860]: I0121 21:40:24.096291 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/root-account-create-update-9nb9k"] Jan 21 21:40:24 crc kubenswrapper[4860]: I0121 21:40:24.103318 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-create-kl96t"] Jan 21 21:40:24 crc kubenswrapper[4860]: I0121 21:40:24.592244 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050d336c-1842-498d-aa18-411b57a080eb" path="/var/lib/kubelet/pods/050d336c-1842-498d-aa18-411b57a080eb/volumes" Jan 21 21:40:24 crc kubenswrapper[4860]: I0121 21:40:24.593201 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abcab561-13de-4aa9-b176-f82be46c8107" path="/var/lib/kubelet/pods/abcab561-13de-4aa9-b176-f82be46c8107/volumes" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.198146 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.202563 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.213736 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.239213 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.272116 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.300495 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.515615 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.542633 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.784219 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.789164 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.789403 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.820671 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:40:25 crc kubenswrapper[4860]: I0121 21:40:25.828839 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:40:26 crc kubenswrapper[4860]: I0121 21:40:26.042064 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj"] Jan 21 21:40:26 crc kubenswrapper[4860]: I0121 21:40:26.052355 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-68ee-account-create-update-6j6xj"] Jan 21 21:40:26 crc kubenswrapper[4860]: I0121 21:40:26.596109 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16" 
path="/var/lib/kubelet/pods/b3c4fc92-8c98-4b54-8dcc-dd9b13e05b16/volumes" Jan 21 21:40:27 crc kubenswrapper[4860]: I0121 21:40:27.631798 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:40:27 crc kubenswrapper[4860]: I0121 21:40:27.632763 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="sg-core" containerID="cri-o://41f2b3848e6d14b309a328eaefc6a97d936963f3080e2b43b9bacb53fc9070bb" gracePeriod=30 Jan 21 21:40:27 crc kubenswrapper[4860]: I0121 21:40:27.632922 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="ceilometer-notification-agent" containerID="cri-o://c7cd463a376b8fa6de67e1c5a880b3b083f2e2a044cb4ab40d97cd52f06e5354" gracePeriod=30 Jan 21 21:40:27 crc kubenswrapper[4860]: I0121 21:40:27.632970 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="ceilometer-central-agent" containerID="cri-o://799f0ca7d4b0e6cb05718ad3158d8b0555759970232d63317a4aedc2686fd103" gracePeriod=30 Jan 21 21:40:27 crc kubenswrapper[4860]: I0121 21:40:27.633373 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="proxy-httpd" containerID="cri-o://58e58897c6b9326584a83de568687e3019048abccfbd53322bcec9bea557e42b" gracePeriod=30 Jan 21 21:40:27 crc kubenswrapper[4860]: I0121 21:40:27.644986 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.200:3000/\": read tcp 
10.217.0.2:45528->10.217.0.200:3000: read: connection reset by peer" Jan 21 21:40:27 crc kubenswrapper[4860]: I0121 21:40:27.810585 4860 generic.go:334] "Generic (PLEG): container finished" podID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerID="41f2b3848e6d14b309a328eaefc6a97d936963f3080e2b43b9bacb53fc9070bb" exitCode=2 Jan 21 21:40:27 crc kubenswrapper[4860]: I0121 21:40:27.810660 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5e34a2e8-76ed-4064-a32d-26e4c5e01c20","Type":"ContainerDied","Data":"41f2b3848e6d14b309a328eaefc6a97d936963f3080e2b43b9bacb53fc9070bb"} Jan 21 21:40:28 crc kubenswrapper[4860]: I0121 21:40:28.826240 4860 generic.go:334] "Generic (PLEG): container finished" podID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerID="58e58897c6b9326584a83de568687e3019048abccfbd53322bcec9bea557e42b" exitCode=0 Jan 21 21:40:28 crc kubenswrapper[4860]: I0121 21:40:28.826313 4860 generic.go:334] "Generic (PLEG): container finished" podID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerID="799f0ca7d4b0e6cb05718ad3158d8b0555759970232d63317a4aedc2686fd103" exitCode=0 Jan 21 21:40:28 crc kubenswrapper[4860]: I0121 21:40:28.826340 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5e34a2e8-76ed-4064-a32d-26e4c5e01c20","Type":"ContainerDied","Data":"58e58897c6b9326584a83de568687e3019048abccfbd53322bcec9bea557e42b"} Jan 21 21:40:28 crc kubenswrapper[4860]: I0121 21:40:28.826438 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5e34a2e8-76ed-4064-a32d-26e4c5e01c20","Type":"ContainerDied","Data":"799f0ca7d4b0e6cb05718ad3158d8b0555759970232d63317a4aedc2686fd103"} Jan 21 21:40:30 crc kubenswrapper[4860]: I0121 21:40:30.200870 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="proxy-httpd" 
probeResult="failure" output="Get \"https://10.217.0.200:3000/\": dial tcp 10.217.0.200:3000: connect: connection refused" Jan 21 21:40:31 crc kubenswrapper[4860]: E0121 21:40:31.739886 4860 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Jan 21 21:40:31 crc kubenswrapper[4860]: I0121 21:40:31.869564 4860 generic.go:334] "Generic (PLEG): container finished" podID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerID="c7cd463a376b8fa6de67e1c5a880b3b083f2e2a044cb4ab40d97cd52f06e5354" exitCode=0 Jan 21 21:40:31 crc kubenswrapper[4860]: I0121 21:40:31.869633 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5e34a2e8-76ed-4064-a32d-26e4c5e01c20","Type":"ContainerDied","Data":"c7cd463a376b8fa6de67e1c5a880b3b083f2e2a044cb4ab40d97cd52f06e5354"} Jan 21 21:40:31 crc kubenswrapper[4860]: I0121 21:40:31.938783 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.101909 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-sg-core-conf-yaml\") pod \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.102009 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-config-data\") pod \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.102067 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-log-httpd\") pod \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.102139 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrz8k\" (UniqueName: \"kubernetes.io/projected/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-kube-api-access-nrz8k\") pod \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.102158 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-run-httpd\") pod \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.102237 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-ceilometer-tls-certs\") pod \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.102367 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-scripts\") pod \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.102394 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-combined-ca-bundle\") pod \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\" (UID: \"5e34a2e8-76ed-4064-a32d-26e4c5e01c20\") " Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.105375 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5e34a2e8-76ed-4064-a32d-26e4c5e01c20" (UID: "5e34a2e8-76ed-4064-a32d-26e4c5e01c20"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.105947 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5e34a2e8-76ed-4064-a32d-26e4c5e01c20" (UID: "5e34a2e8-76ed-4064-a32d-26e4c5e01c20"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.127494 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-kube-api-access-nrz8k" (OuterVolumeSpecName: "kube-api-access-nrz8k") pod "5e34a2e8-76ed-4064-a32d-26e4c5e01c20" (UID: "5e34a2e8-76ed-4064-a32d-26e4c5e01c20"). InnerVolumeSpecName "kube-api-access-nrz8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.127519 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-scripts" (OuterVolumeSpecName: "scripts") pod "5e34a2e8-76ed-4064-a32d-26e4c5e01c20" (UID: "5e34a2e8-76ed-4064-a32d-26e4c5e01c20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.169296 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5e34a2e8-76ed-4064-a32d-26e4c5e01c20" (UID: "5e34a2e8-76ed-4064-a32d-26e4c5e01c20"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.180875 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5e34a2e8-76ed-4064-a32d-26e4c5e01c20" (UID: "5e34a2e8-76ed-4064-a32d-26e4c5e01c20"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.198586 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e34a2e8-76ed-4064-a32d-26e4c5e01c20" (UID: "5e34a2e8-76ed-4064-a32d-26e4c5e01c20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.205968 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.206009 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.206021 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.206029 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.206043 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrz8k\" (UniqueName: \"kubernetes.io/projected/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-kube-api-access-nrz8k\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.206053 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.206063 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.227769 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-config-data" (OuterVolumeSpecName: "config-data") pod "5e34a2e8-76ed-4064-a32d-26e4c5e01c20" (UID: "5e34a2e8-76ed-4064-a32d-26e4c5e01c20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.308459 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e34a2e8-76ed-4064-a32d-26e4c5e01c20-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.883752 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5e34a2e8-76ed-4064-a32d-26e4c5e01c20","Type":"ContainerDied","Data":"8b54c3c64a83dd7e8985ae4dbaef01db52e27b6a15750b737d9b2e6fd0619f2a"} Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.884219 4860 scope.go:117] "RemoveContainer" containerID="58e58897c6b9326584a83de568687e3019048abccfbd53322bcec9bea557e42b" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.883821 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.908063 4860 scope.go:117] "RemoveContainer" containerID="41f2b3848e6d14b309a328eaefc6a97d936963f3080e2b43b9bacb53fc9070bb" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.917366 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.935059 4860 scope.go:117] "RemoveContainer" containerID="c7cd463a376b8fa6de67e1c5a880b3b083f2e2a044cb4ab40d97cd52f06e5354" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.940373 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.955413 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:40:32 crc kubenswrapper[4860]: E0121 21:40:32.956157 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="proxy-httpd" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.956187 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="proxy-httpd" Jan 21 21:40:32 crc kubenswrapper[4860]: E0121 21:40:32.956204 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="ceilometer-notification-agent" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.956212 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="ceilometer-notification-agent" Jan 21 21:40:32 crc kubenswrapper[4860]: E0121 21:40:32.956233 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="sg-core" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.956241 4860 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="sg-core" Jan 21 21:40:32 crc kubenswrapper[4860]: E0121 21:40:32.956257 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="ceilometer-central-agent" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.956265 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="ceilometer-central-agent" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.956542 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="proxy-httpd" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.956558 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="ceilometer-central-agent" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.956575 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="ceilometer-notification-agent" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.956590 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" containerName="sg-core" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.960458 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.963531 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.966058 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.966330 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.966517 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:40:32 crc kubenswrapper[4860]: I0121 21:40:32.980284 4860 scope.go:117] "RemoveContainer" containerID="799f0ca7d4b0e6cb05718ad3158d8b0555759970232d63317a4aedc2686fd103" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.122893 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-config-data\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.123022 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.123055 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-scripts\") pod \"ceilometer-0\" 
(UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.123090 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.123140 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-run-httpd\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.123165 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlfbb\" (UniqueName: \"kubernetes.io/projected/1458c3fb-42c0-490d-9ad4-efd09aecdd43-kube-api-access-jlfbb\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.123188 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.123208 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-log-httpd\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.225328 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-config-data\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.225436 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.225477 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-scripts\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.225524 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.225587 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-run-httpd\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.225619 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-jlfbb\" (UniqueName: \"kubernetes.io/projected/1458c3fb-42c0-490d-9ad4-efd09aecdd43-kube-api-access-jlfbb\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.225674 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.225702 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-log-httpd\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.226328 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-log-httpd\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.227130 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-run-httpd\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.232363 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.232441 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.232790 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-scripts\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.236499 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.245166 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-config-data\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.264858 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlfbb\" (UniqueName: \"kubernetes.io/projected/1458c3fb-42c0-490d-9ad4-efd09aecdd43-kube-api-access-jlfbb\") pod \"ceilometer-0\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:33 crc kubenswrapper[4860]: I0121 21:40:33.297666 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:34 crc kubenswrapper[4860]: I0121 21:40:34.023068 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:40:34 crc kubenswrapper[4860]: I0121 21:40:34.608501 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e34a2e8-76ed-4064-a32d-26e4c5e01c20" path="/var/lib/kubelet/pods/5e34a2e8-76ed-4064-a32d-26e4c5e01c20/volumes" Jan 21 21:40:34 crc kubenswrapper[4860]: I0121 21:40:34.925095 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1458c3fb-42c0-490d-9ad4-efd09aecdd43","Type":"ContainerStarted","Data":"dd1b967ae96687a74f42d0c76dad537925a122ff7e05114e641de114ca4635b4"} Jan 21 21:40:34 crc kubenswrapper[4860]: I0121 21:40:34.925514 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1458c3fb-42c0-490d-9ad4-efd09aecdd43","Type":"ContainerStarted","Data":"b0fbfb4090ab60d4528a8df28ce32b1de2dbd365b0fdf0ce4025216340c13e3a"} Jan 21 21:40:35 crc kubenswrapper[4860]: I0121 21:40:35.941620 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1458c3fb-42c0-490d-9ad4-efd09aecdd43","Type":"ContainerStarted","Data":"30e974f40c4a96845e4d5dfc91b4cf60c10f4ae4ba7ba92a97c8236f3996b827"} Jan 21 21:40:36 crc kubenswrapper[4860]: I0121 21:40:36.955814 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1458c3fb-42c0-490d-9ad4-efd09aecdd43","Type":"ContainerStarted","Data":"4b7adae04519d69aff978c864e418e9a1ffc7130f1ea9b9f0cc278091da2d40b"} Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.527192 4860 scope.go:117] "RemoveContainer" containerID="a886018558cdff59514cfc207e4da819eec8f58d00003200d04086e0558f197d" Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.618357 4860 scope.go:117] "RemoveContainer" 
containerID="a773c7d784ca665d5962c6c70d34a7d437b030dce8875e4fb436e3826e44a9df" Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.664283 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.666125 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.670465 4860 scope.go:117] "RemoveContainer" containerID="2ade1f595062cbadefc1775970b79d488ebd925d2f0716232dad6693c0d1f1fc" Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.683048 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.748861 4860 scope.go:117] "RemoveContainer" containerID="a1ed4408268c4cab543ece235e91c256c7e59e9440b42c7297fd789deb71d16e" Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.858659 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.859082 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.859111 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svgrq\" (UniqueName: 
\"kubernetes.io/projected/0a465b55-b68b-4b99-8a30-46d5b3a89121-kube-api-access-svgrq\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.859194 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.859266 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.859285 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a465b55-b68b-4b99-8a30-46d5b3a89121-logs\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:37 crc kubenswrapper[4860]: I0121 21:40:37.913221 4860 scope.go:117] "RemoveContainer" containerID="c081d5c9262a9ac1caf9fd9368718efc6ac592af7f3d8b29611ed2510b8ad0db" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.004393 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 
crc kubenswrapper[4860]: I0121 21:40:38.004540 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.004573 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a465b55-b68b-4b99-8a30-46d5b3a89121-logs\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.004625 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.004652 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.004675 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svgrq\" (UniqueName: \"kubernetes.io/projected/0a465b55-b68b-4b99-8a30-46d5b3a89121-kube-api-access-svgrq\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.008623 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a465b55-b68b-4b99-8a30-46d5b3a89121-logs\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.012050 4860 scope.go:117] "RemoveContainer" containerID="4e387828e31e1e691b60df2d27d0439e678d4e4a0274734ec1de57e3d2bd4ca2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.017666 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.027870 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.028387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.037753 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 
21:40:38.044800 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svgrq\" (UniqueName: \"kubernetes.io/projected/0a465b55-b68b-4b99-8a30-46d5b3a89121-kube-api-access-svgrq\") pod \"watcher-kuttl-api-2\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.065439 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1458c3fb-42c0-490d-9ad4-efd09aecdd43","Type":"ContainerStarted","Data":"69374e3f725b3c8ea506cf437e466698a06c34bde306cbcb8705efa8f5a9a70f"} Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.067115 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.079079 4860 scope.go:117] "RemoveContainer" containerID="86bd2cf47dd7cae3582f59870a214b26a7555ae5ef064b240a003470f9ba2a6a" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.103587 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=3.068054896 podStartE2EDuration="6.103555257s" podCreationTimestamp="2026-01-21 21:40:32 +0000 UTC" firstStartedPulling="2026-01-21 21:40:34.061176806 +0000 UTC m=+1926.283355276" lastFinishedPulling="2026-01-21 21:40:37.096677167 +0000 UTC m=+1929.318855637" observedRunningTime="2026-01-21 21:40:38.095863729 +0000 UTC m=+1930.318042209" watchObservedRunningTime="2026-01-21 21:40:38.103555257 +0000 UTC m=+1930.325733727" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.116638 4860 scope.go:117] "RemoveContainer" containerID="358db26bed2e6a3e77a7308da8d7aa133241c2480b5a3e0bffbcb04012546a22" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.155058 4860 scope.go:117] "RemoveContainer" containerID="dc5c685ee1d3d36d41163c540ec271c882f501aa00b5c3db55708809fead6568" Jan 21 
21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.201209 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:38 crc kubenswrapper[4860]: I0121 21:40:38.802615 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 21 21:40:39 crc kubenswrapper[4860]: I0121 21:40:39.127906 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"0a465b55-b68b-4b99-8a30-46d5b3a89121","Type":"ContainerStarted","Data":"db35b463b36385ad43697f7507ec7a3fe79d0aae03a649cec55cb00f00de36c2"} Jan 21 21:40:39 crc kubenswrapper[4860]: I0121 21:40:39.129663 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"0a465b55-b68b-4b99-8a30-46d5b3a89121","Type":"ContainerStarted","Data":"faca7f0c6dfaa1ba6ffe21de5fe15fbd60c32470c002e4503c60173409eb264d"} Jan 21 21:40:40 crc kubenswrapper[4860]: I0121 21:40:40.140223 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"0a465b55-b68b-4b99-8a30-46d5b3a89121","Type":"ContainerStarted","Data":"278af58e48db73733fc261ca0d7894582b91fa3b83e893d4e6e6dd80ce40fc4a"} Jan 21 21:40:40 crc kubenswrapper[4860]: I0121 21:40:40.140504 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:40 crc kubenswrapper[4860]: I0121 21:40:40.180430 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-2" podStartSLOduration=3.18040043 podStartE2EDuration="3.18040043s" podCreationTimestamp="2026-01-21 21:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:40:40.164060375 +0000 UTC m=+1932.386238865" 
watchObservedRunningTime="2026-01-21 21:40:40.18040043 +0000 UTC m=+1932.402578900" Jan 21 21:40:42 crc kubenswrapper[4860]: I0121 21:40:42.665215 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:43 crc kubenswrapper[4860]: I0121 21:40:43.201735 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:48 crc kubenswrapper[4860]: I0121 21:40:48.201887 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:48 crc kubenswrapper[4860]: I0121 21:40:48.378063 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:48 crc kubenswrapper[4860]: I0121 21:40:48.384383 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:49 crc kubenswrapper[4860]: I0121 21:40:49.291004 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 21 21:40:49 crc kubenswrapper[4860]: I0121 21:40:49.299840 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 21 21:40:49 crc kubenswrapper[4860]: I0121 21:40:49.300173 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8354c729-beee-4a94-9e6a-50095582c1a9" containerName="watcher-kuttl-api-log" containerID="cri-o://763e65087312b27f39e74190dfad2d864d5b682fb9c94482430c1a1b71f32a06" gracePeriod=30 Jan 21 21:40:49 crc kubenswrapper[4860]: I0121 21:40:49.300352 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8354c729-beee-4a94-9e6a-50095582c1a9" containerName="watcher-api" 
containerID="cri-o://e9d24fcd5f7417cd2d28ee3ea00b4b4afc282eafd1564859e962251645abd87c" gracePeriod=30 Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.213906 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8354c729-beee-4a94-9e6a-50095582c1a9" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.205:9322/\": dial tcp 10.217.0.205:9322: connect: connection refused" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.213960 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8354c729-beee-4a94-9e6a-50095582c1a9" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.205:9322/\": dial tcp 10.217.0.205:9322: connect: connection refused" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.246040 4860 generic.go:334] "Generic (PLEG): container finished" podID="8354c729-beee-4a94-9e6a-50095582c1a9" containerID="e9d24fcd5f7417cd2d28ee3ea00b4b4afc282eafd1564859e962251645abd87c" exitCode=0 Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.246096 4860 generic.go:334] "Generic (PLEG): container finished" podID="8354c729-beee-4a94-9e6a-50095582c1a9" containerID="763e65087312b27f39e74190dfad2d864d5b682fb9c94482430c1a1b71f32a06" exitCode=143 Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.246389 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="0a465b55-b68b-4b99-8a30-46d5b3a89121" containerName="watcher-kuttl-api-log" containerID="cri-o://db35b463b36385ad43697f7507ec7a3fe79d0aae03a649cec55cb00f00de36c2" gracePeriod=30 Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.246782 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" 
event={"ID":"8354c729-beee-4a94-9e6a-50095582c1a9","Type":"ContainerDied","Data":"e9d24fcd5f7417cd2d28ee3ea00b4b4afc282eafd1564859e962251645abd87c"} Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.246828 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8354c729-beee-4a94-9e6a-50095582c1a9","Type":"ContainerDied","Data":"763e65087312b27f39e74190dfad2d864d5b682fb9c94482430c1a1b71f32a06"} Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.247273 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="0a465b55-b68b-4b99-8a30-46d5b3a89121" containerName="watcher-api" containerID="cri-o://278af58e48db73733fc261ca0d7894582b91fa3b83e893d4e6e6dd80ce40fc4a" gracePeriod=30 Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.671761 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.709500 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-cert-memcached-mtls\") pod \"8354c729-beee-4a94-9e6a-50095582c1a9\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.709571 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-config-data\") pod \"8354c729-beee-4a94-9e6a-50095582c1a9\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.709812 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79slc\" (UniqueName: \"kubernetes.io/projected/8354c729-beee-4a94-9e6a-50095582c1a9-kube-api-access-79slc\") 
pod \"8354c729-beee-4a94-9e6a-50095582c1a9\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.709859 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-custom-prometheus-ca\") pod \"8354c729-beee-4a94-9e6a-50095582c1a9\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.709916 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-combined-ca-bundle\") pod \"8354c729-beee-4a94-9e6a-50095582c1a9\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.710008 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8354c729-beee-4a94-9e6a-50095582c1a9-logs\") pod \"8354c729-beee-4a94-9e6a-50095582c1a9\" (UID: \"8354c729-beee-4a94-9e6a-50095582c1a9\") " Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.712730 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8354c729-beee-4a94-9e6a-50095582c1a9-logs" (OuterVolumeSpecName: "logs") pod "8354c729-beee-4a94-9e6a-50095582c1a9" (UID: "8354c729-beee-4a94-9e6a-50095582c1a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.754686 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8354c729-beee-4a94-9e6a-50095582c1a9-kube-api-access-79slc" (OuterVolumeSpecName: "kube-api-access-79slc") pod "8354c729-beee-4a94-9e6a-50095582c1a9" (UID: "8354c729-beee-4a94-9e6a-50095582c1a9"). InnerVolumeSpecName "kube-api-access-79slc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.788794 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8354c729-beee-4a94-9e6a-50095582c1a9" (UID: "8354c729-beee-4a94-9e6a-50095582c1a9"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.792717 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8354c729-beee-4a94-9e6a-50095582c1a9" (UID: "8354c729-beee-4a94-9e6a-50095582c1a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.846079 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8354c729-beee-4a94-9e6a-50095582c1a9-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.847098 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79slc\" (UniqueName: \"kubernetes.io/projected/8354c729-beee-4a94-9e6a-50095582c1a9-kube-api-access-79slc\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.847134 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.847150 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.850456 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "8354c729-beee-4a94-9e6a-50095582c1a9" (UID: "8354c729-beee-4a94-9e6a-50095582c1a9"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.889307 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-config-data" (OuterVolumeSpecName: "config-data") pod "8354c729-beee-4a94-9e6a-50095582c1a9" (UID: "8354c729-beee-4a94-9e6a-50095582c1a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.949707 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:50 crc kubenswrapper[4860]: I0121 21:40:50.949746 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8354c729-beee-4a94-9e6a-50095582c1a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.261673 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8354c729-beee-4a94-9e6a-50095582c1a9","Type":"ContainerDied","Data":"8f1f7ef2dbdda350571d26a51b3c7cafcc4b95b69798b29a2a719a8395112fef"} Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.261761 4860 scope.go:117] "RemoveContainer" 
containerID="e9d24fcd5f7417cd2d28ee3ea00b4b4afc282eafd1564859e962251645abd87c" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.262864 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.270134 4860 generic.go:334] "Generic (PLEG): container finished" podID="0a465b55-b68b-4b99-8a30-46d5b3a89121" containerID="278af58e48db73733fc261ca0d7894582b91fa3b83e893d4e6e6dd80ce40fc4a" exitCode=0 Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.270191 4860 generic.go:334] "Generic (PLEG): container finished" podID="0a465b55-b68b-4b99-8a30-46d5b3a89121" containerID="db35b463b36385ad43697f7507ec7a3fe79d0aae03a649cec55cb00f00de36c2" exitCode=143 Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.270244 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"0a465b55-b68b-4b99-8a30-46d5b3a89121","Type":"ContainerDied","Data":"278af58e48db73733fc261ca0d7894582b91fa3b83e893d4e6e6dd80ce40fc4a"} Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.270359 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"0a465b55-b68b-4b99-8a30-46d5b3a89121","Type":"ContainerDied","Data":"db35b463b36385ad43697f7507ec7a3fe79d0aae03a649cec55cb00f00de36c2"} Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.323309 4860 scope.go:117] "RemoveContainer" containerID="763e65087312b27f39e74190dfad2d864d5b682fb9c94482430c1a1b71f32a06" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.325023 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.331406 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.460335 4860 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.662729 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-cert-memcached-mtls\") pod \"0a465b55-b68b-4b99-8a30-46d5b3a89121\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.662886 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svgrq\" (UniqueName: \"kubernetes.io/projected/0a465b55-b68b-4b99-8a30-46d5b3a89121-kube-api-access-svgrq\") pod \"0a465b55-b68b-4b99-8a30-46d5b3a89121\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.662968 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a465b55-b68b-4b99-8a30-46d5b3a89121-logs\") pod \"0a465b55-b68b-4b99-8a30-46d5b3a89121\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.663082 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-custom-prometheus-ca\") pod \"0a465b55-b68b-4b99-8a30-46d5b3a89121\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.663110 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-config-data\") pod \"0a465b55-b68b-4b99-8a30-46d5b3a89121\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.663166 4860 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-combined-ca-bundle\") pod \"0a465b55-b68b-4b99-8a30-46d5b3a89121\" (UID: \"0a465b55-b68b-4b99-8a30-46d5b3a89121\") " Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.664374 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a465b55-b68b-4b99-8a30-46d5b3a89121-logs" (OuterVolumeSpecName: "logs") pod "0a465b55-b68b-4b99-8a30-46d5b3a89121" (UID: "0a465b55-b68b-4b99-8a30-46d5b3a89121"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.671994 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a465b55-b68b-4b99-8a30-46d5b3a89121-kube-api-access-svgrq" (OuterVolumeSpecName: "kube-api-access-svgrq") pod "0a465b55-b68b-4b99-8a30-46d5b3a89121" (UID: "0a465b55-b68b-4b99-8a30-46d5b3a89121"). InnerVolumeSpecName "kube-api-access-svgrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.733173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "0a465b55-b68b-4b99-8a30-46d5b3a89121" (UID: "0a465b55-b68b-4b99-8a30-46d5b3a89121"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.750228 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a465b55-b68b-4b99-8a30-46d5b3a89121" (UID: "0a465b55-b68b-4b99-8a30-46d5b3a89121"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.769544 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.769956 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.769975 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svgrq\" (UniqueName: \"kubernetes.io/projected/0a465b55-b68b-4b99-8a30-46d5b3a89121-kube-api-access-svgrq\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.769988 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a465b55-b68b-4b99-8a30-46d5b3a89121-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.786658 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-config-data" (OuterVolumeSpecName: "config-data") pod "0a465b55-b68b-4b99-8a30-46d5b3a89121" (UID: "0a465b55-b68b-4b99-8a30-46d5b3a89121"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.811473 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "0a465b55-b68b-4b99-8a30-46d5b3a89121" (UID: "0a465b55-b68b-4b99-8a30-46d5b3a89121"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.873412 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:51 crc kubenswrapper[4860]: I0121 21:40:51.873465 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a465b55-b68b-4b99-8a30-46d5b3a89121-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:40:52 crc kubenswrapper[4860]: I0121 21:40:52.283302 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"0a465b55-b68b-4b99-8a30-46d5b3a89121","Type":"ContainerDied","Data":"faca7f0c6dfaa1ba6ffe21de5fe15fbd60c32470c002e4503c60173409eb264d"} Jan 21 21:40:52 crc kubenswrapper[4860]: I0121 21:40:52.283374 4860 scope.go:117] "RemoveContainer" containerID="278af58e48db73733fc261ca0d7894582b91fa3b83e893d4e6e6dd80ce40fc4a" Jan 21 21:40:52 crc kubenswrapper[4860]: I0121 21:40:52.283381 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 21 21:40:52 crc kubenswrapper[4860]: I0121 21:40:52.324232 4860 scope.go:117] "RemoveContainer" containerID="db35b463b36385ad43697f7507ec7a3fe79d0aae03a649cec55cb00f00de36c2" Jan 21 21:40:52 crc kubenswrapper[4860]: I0121 21:40:52.326314 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 21 21:40:52 crc kubenswrapper[4860]: I0121 21:40:52.336749 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 21 21:40:52 crc kubenswrapper[4860]: I0121 21:40:52.592445 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a465b55-b68b-4b99-8a30-46d5b3a89121" path="/var/lib/kubelet/pods/0a465b55-b68b-4b99-8a30-46d5b3a89121/volumes" Jan 21 21:40:52 crc kubenswrapper[4860]: I0121 21:40:52.593418 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8354c729-beee-4a94-9e6a-50095582c1a9" path="/var/lib/kubelet/pods/8354c729-beee-4a94-9e6a-50095582c1a9/volumes" Jan 21 21:40:53 crc kubenswrapper[4860]: I0121 21:40:53.647100 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:40:53 crc kubenswrapper[4860]: I0121 21:40:53.647892 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="ae543dcb-2178-4d58-bc18-d5d56b268598" containerName="watcher-kuttl-api-log" containerID="cri-o://346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43" gracePeriod=30 Jan 21 21:40:53 crc kubenswrapper[4860]: I0121 21:40:53.648134 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="ae543dcb-2178-4d58-bc18-d5d56b268598" containerName="watcher-api" containerID="cri-o://dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3" gracePeriod=30 Jan 21 
21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.324452 4860 generic.go:334] "Generic (PLEG): container finished" podID="ae543dcb-2178-4d58-bc18-d5d56b268598" containerID="346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43" exitCode=143 Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.324541 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ae543dcb-2178-4d58-bc18-d5d56b268598","Type":"ContainerDied","Data":"346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43"} Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.879089 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"] Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.881147 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kvpj9"] Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.925348 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchera25b-account-delete-rfcd4"] Jan 21 21:40:54 crc kubenswrapper[4860]: E0121 21:40:54.926124 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a465b55-b68b-4b99-8a30-46d5b3a89121" containerName="watcher-api" Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.926156 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a465b55-b68b-4b99-8a30-46d5b3a89121" containerName="watcher-api" Jan 21 21:40:54 crc kubenswrapper[4860]: E0121 21:40:54.926174 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a465b55-b68b-4b99-8a30-46d5b3a89121" containerName="watcher-kuttl-api-log" Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.926183 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a465b55-b68b-4b99-8a30-46d5b3a89121" containerName="watcher-kuttl-api-log" Jan 21 21:40:54 crc kubenswrapper[4860]: E0121 21:40:54.926202 4860 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8354c729-beee-4a94-9e6a-50095582c1a9" containerName="watcher-api" Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.926209 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8354c729-beee-4a94-9e6a-50095582c1a9" containerName="watcher-api" Jan 21 21:40:54 crc kubenswrapper[4860]: E0121 21:40:54.926231 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8354c729-beee-4a94-9e6a-50095582c1a9" containerName="watcher-kuttl-api-log" Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.926238 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8354c729-beee-4a94-9e6a-50095582c1a9" containerName="watcher-kuttl-api-log" Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.926464 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a465b55-b68b-4b99-8a30-46d5b3a89121" containerName="watcher-kuttl-api-log" Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.926499 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a465b55-b68b-4b99-8a30-46d5b3a89121" containerName="watcher-api" Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.926509 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8354c729-beee-4a94-9e6a-50095582c1a9" containerName="watcher-api" Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.926522 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8354c729-beee-4a94-9e6a-50095582c1a9" containerName="watcher-kuttl-api-log" Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.927498 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4"
Jan 21 21:40:54 crc kubenswrapper[4860]: I0121 21:40:54.941154 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchera25b-account-delete-rfcd4"]
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.043219 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.050126 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="3d3acbc7-567c-4f78-a33b-673e5c6b831a" containerName="watcher-decision-engine" containerID="cri-o://772b48e124d222cf711b1e5ea1a5f169211f24a0166d495e01711c6493ee4ef4" gracePeriod=30
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.064373 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwsc\" (UniqueName: \"kubernetes.io/projected/4a538954-eacf-46bc-b4b2-44baa813c19f-kube-api-access-zkwsc\") pod \"watchera25b-account-delete-rfcd4\" (UID: \"4a538954-eacf-46bc-b4b2-44baa813c19f\") " pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.064590 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a538954-eacf-46bc-b4b2-44baa813c19f-operator-scripts\") pod \"watchera25b-account-delete-rfcd4\" (UID: \"4a538954-eacf-46bc-b4b2-44baa813c19f\") " pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.137608 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.138032 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a3203815-18f3-4de0-9887-a921d5f309d3" containerName="watcher-applier" containerID="cri-o://6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449" gracePeriod=30
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.139488 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.168795 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a538954-eacf-46bc-b4b2-44baa813c19f-operator-scripts\") pod \"watchera25b-account-delete-rfcd4\" (UID: \"4a538954-eacf-46bc-b4b2-44baa813c19f\") " pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.169039 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkwsc\" (UniqueName: \"kubernetes.io/projected/4a538954-eacf-46bc-b4b2-44baa813c19f-kube-api-access-zkwsc\") pod \"watchera25b-account-delete-rfcd4\" (UID: \"4a538954-eacf-46bc-b4b2-44baa813c19f\") " pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.170391 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a538954-eacf-46bc-b4b2-44baa813c19f-operator-scripts\") pod \"watchera25b-account-delete-rfcd4\" (UID: \"4a538954-eacf-46bc-b4b2-44baa813c19f\") " pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.204412 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkwsc\" (UniqueName: \"kubernetes.io/projected/4a538954-eacf-46bc-b4b2-44baa813c19f-kube-api-access-zkwsc\") pod \"watchera25b-account-delete-rfcd4\" (UID: \"4a538954-eacf-46bc-b4b2-44baa813c19f\") " pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.270010 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-combined-ca-bundle\") pod \"ae543dcb-2178-4d58-bc18-d5d56b268598\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") "
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.270086 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-config-data\") pod \"ae543dcb-2178-4d58-bc18-d5d56b268598\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") "
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.270231 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl9pl\" (UniqueName: \"kubernetes.io/projected/ae543dcb-2178-4d58-bc18-d5d56b268598-kube-api-access-cl9pl\") pod \"ae543dcb-2178-4d58-bc18-d5d56b268598\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") "
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.270305 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae543dcb-2178-4d58-bc18-d5d56b268598-logs\") pod \"ae543dcb-2178-4d58-bc18-d5d56b268598\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") "
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.270379 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-cert-memcached-mtls\") pod \"ae543dcb-2178-4d58-bc18-d5d56b268598\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") "
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.270424 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-custom-prometheus-ca\") pod \"ae543dcb-2178-4d58-bc18-d5d56b268598\" (UID: \"ae543dcb-2178-4d58-bc18-d5d56b268598\") "
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.273374 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae543dcb-2178-4d58-bc18-d5d56b268598-logs" (OuterVolumeSpecName: "logs") pod "ae543dcb-2178-4d58-bc18-d5d56b268598" (UID: "ae543dcb-2178-4d58-bc18-d5d56b268598"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:40:55 crc kubenswrapper[4860]: E0121 21:40:55.282655 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.285024 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae543dcb-2178-4d58-bc18-d5d56b268598-kube-api-access-cl9pl" (OuterVolumeSpecName: "kube-api-access-cl9pl") pod "ae543dcb-2178-4d58-bc18-d5d56b268598" (UID: "ae543dcb-2178-4d58-bc18-d5d56b268598"). InnerVolumeSpecName "kube-api-access-cl9pl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:40:55 crc kubenswrapper[4860]: E0121 21:40:55.285408 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:40:55 crc kubenswrapper[4860]: E0121 21:40:55.297445 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:40:55 crc kubenswrapper[4860]: E0121 21:40:55.297551 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a3203815-18f3-4de0-9887-a921d5f309d3" containerName="watcher-applier"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.322174 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae543dcb-2178-4d58-bc18-d5d56b268598" (UID: "ae543dcb-2178-4d58-bc18-d5d56b268598"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.322893 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ae543dcb-2178-4d58-bc18-d5d56b268598" (UID: "ae543dcb-2178-4d58-bc18-d5d56b268598"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.354772 4860 generic.go:334] "Generic (PLEG): container finished" podID="ae543dcb-2178-4d58-bc18-d5d56b268598" containerID="dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3" exitCode=0
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.354830 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ae543dcb-2178-4d58-bc18-d5d56b268598","Type":"ContainerDied","Data":"dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3"}
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.354865 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ae543dcb-2178-4d58-bc18-d5d56b268598","Type":"ContainerDied","Data":"fdac4127e8dba9d97ddccfa0aff031211b2bd8d17fe144fc0a2916d370b0888b"}
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.354884 4860 scope.go:117] "RemoveContainer" containerID="dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.355079 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.363112 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.366906 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-config-data" (OuterVolumeSpecName: "config-data") pod "ae543dcb-2178-4d58-bc18-d5d56b268598" (UID: "ae543dcb-2178-4d58-bc18-d5d56b268598"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.377274 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.377612 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.377686 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.377746 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl9pl\" (UniqueName: \"kubernetes.io/projected/ae543dcb-2178-4d58-bc18-d5d56b268598-kube-api-access-cl9pl\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.377820 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae543dcb-2178-4d58-bc18-d5d56b268598-logs\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.396979 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "ae543dcb-2178-4d58-bc18-d5d56b268598" (UID: "ae543dcb-2178-4d58-bc18-d5d56b268598"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.422516 4860 scope.go:117] "RemoveContainer" containerID="346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.480228 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ae543dcb-2178-4d58-bc18-d5d56b268598-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.503320 4860 scope.go:117] "RemoveContainer" containerID="dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3"
Jan 21 21:40:55 crc kubenswrapper[4860]: E0121 21:40:55.504049 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3\": container with ID starting with dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3 not found: ID does not exist" containerID="dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.504124 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3"} err="failed to get container status \"dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3\": rpc error: code = NotFound desc = could not find container \"dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3\": container with ID starting with dc550bf6b4655547dde5262d72ed477b04c544c6069fe9f6448f6a6d5c2c91a3 not found: ID does not exist"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.504178 4860 scope.go:117] "RemoveContainer" containerID="346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43"
Jan 21 21:40:55 crc kubenswrapper[4860]: E0121 21:40:55.504726 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43\": container with ID starting with 346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43 not found: ID does not exist" containerID="346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.504773 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43"} err="failed to get container status \"346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43\": rpc error: code = NotFound desc = could not find container \"346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43\": container with ID starting with 346a294491a5b89fd89a9624dab2447c488a4432b255cc08ea4581318d467d43 not found: ID does not exist"
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.774237 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.799654 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:40:55 crc kubenswrapper[4860]: I0121 21:40:55.983288 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchera25b-account-delete-rfcd4"]
Jan 21 21:40:56 crc kubenswrapper[4860]: I0121 21:40:56.369554 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4" event={"ID":"4a538954-eacf-46bc-b4b2-44baa813c19f","Type":"ContainerStarted","Data":"ed06255e5e46272974158e8054e08c71fb43dedec44434e07d0bd4ccb42f326f"}
Jan 21 21:40:56 crc kubenswrapper[4860]: I0121 21:40:56.369997 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4" event={"ID":"4a538954-eacf-46bc-b4b2-44baa813c19f","Type":"ContainerStarted","Data":"e4bf8c19518b5c9f53ef4d56f19e18c466a036dcecae6ad599fbfa5af0ef1efe"}
Jan 21 21:40:56 crc kubenswrapper[4860]: I0121 21:40:56.397671 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4" podStartSLOduration=2.397642447 podStartE2EDuration="2.397642447s" podCreationTimestamp="2026-01-21 21:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:40:56.390845347 +0000 UTC m=+1948.613023817" watchObservedRunningTime="2026-01-21 21:40:56.397642447 +0000 UTC m=+1948.619820917"
Jan 21 21:40:56 crc kubenswrapper[4860]: I0121 21:40:56.591493 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d80256-acbd-42e1-9e23-4ab10f79f38f" path="/var/lib/kubelet/pods/a3d80256-acbd-42e1-9e23-4ab10f79f38f/volumes"
Jan 21 21:40:56 crc kubenswrapper[4860]: I0121 21:40:56.592152 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae543dcb-2178-4d58-bc18-d5d56b268598" path="/var/lib/kubelet/pods/ae543dcb-2178-4d58-bc18-d5d56b268598/volumes"
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.390974 4860 generic.go:334] "Generic (PLEG): container finished" podID="3d3acbc7-567c-4f78-a33b-673e5c6b831a" containerID="772b48e124d222cf711b1e5ea1a5f169211f24a0166d495e01711c6493ee4ef4" exitCode=0
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.391658 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3d3acbc7-567c-4f78-a33b-673e5c6b831a","Type":"ContainerDied","Data":"772b48e124d222cf711b1e5ea1a5f169211f24a0166d495e01711c6493ee4ef4"}
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.395379 4860 generic.go:334] "Generic (PLEG): container finished" podID="4a538954-eacf-46bc-b4b2-44baa813c19f" containerID="ed06255e5e46272974158e8054e08c71fb43dedec44434e07d0bd4ccb42f326f" exitCode=0
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.395419 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4" event={"ID":"4a538954-eacf-46bc-b4b2-44baa813c19f","Type":"ContainerDied","Data":"ed06255e5e46272974158e8054e08c71fb43dedec44434e07d0bd4ccb42f326f"}
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.562799 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.627999 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-combined-ca-bundle\") pod \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") "
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.628171 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pqk7\" (UniqueName: \"kubernetes.io/projected/3d3acbc7-567c-4f78-a33b-673e5c6b831a-kube-api-access-8pqk7\") pod \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") "
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.628245 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-custom-prometheus-ca\") pod \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") "
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.628336 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d3acbc7-567c-4f78-a33b-673e5c6b831a-logs\") pod \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") "
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.628376 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-config-data\") pod \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") "
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.628404 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-cert-memcached-mtls\") pod \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\" (UID: \"3d3acbc7-567c-4f78-a33b-673e5c6b831a\") "
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.630674 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d3acbc7-567c-4f78-a33b-673e5c6b831a-logs" (OuterVolumeSpecName: "logs") pod "3d3acbc7-567c-4f78-a33b-673e5c6b831a" (UID: "3d3acbc7-567c-4f78-a33b-673e5c6b831a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.677413 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3acbc7-567c-4f78-a33b-673e5c6b831a-kube-api-access-8pqk7" (OuterVolumeSpecName: "kube-api-access-8pqk7") pod "3d3acbc7-567c-4f78-a33b-673e5c6b831a" (UID: "3d3acbc7-567c-4f78-a33b-673e5c6b831a"). InnerVolumeSpecName "kube-api-access-8pqk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.680126 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d3acbc7-567c-4f78-a33b-673e5c6b831a" (UID: "3d3acbc7-567c-4f78-a33b-673e5c6b831a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.698593 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3d3acbc7-567c-4f78-a33b-673e5c6b831a" (UID: "3d3acbc7-567c-4f78-a33b-673e5c6b831a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.727305 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-config-data" (OuterVolumeSpecName: "config-data") pod "3d3acbc7-567c-4f78-a33b-673e5c6b831a" (UID: "3d3acbc7-567c-4f78-a33b-673e5c6b831a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.733459 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.733504 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.733519 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pqk7\" (UniqueName: \"kubernetes.io/projected/3d3acbc7-567c-4f78-a33b-673e5c6b831a-kube-api-access-8pqk7\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.733527 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.733536 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d3acbc7-567c-4f78-a33b-673e5c6b831a-logs\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.771201 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "3d3acbc7-567c-4f78-a33b-673e5c6b831a" (UID: "3d3acbc7-567c-4f78-a33b-673e5c6b831a"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:40:57 crc kubenswrapper[4860]: I0121 21:40:57.835850 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3d3acbc7-567c-4f78-a33b-673e5c6b831a-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.252087 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.252947 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="ceilometer-central-agent" containerID="cri-o://dd1b967ae96687a74f42d0c76dad537925a122ff7e05114e641de114ca4635b4" gracePeriod=30
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.253039 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="proxy-httpd" containerID="cri-o://69374e3f725b3c8ea506cf437e466698a06c34bde306cbcb8705efa8f5a9a70f" gracePeriod=30
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.253232 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="sg-core" containerID="cri-o://4b7adae04519d69aff978c864e418e9a1ffc7130f1ea9b9f0cc278091da2d40b" gracePeriod=30
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.253242 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="ceilometer-notification-agent" containerID="cri-o://30e974f40c4a96845e4d5dfc91b4cf60c10f4ae4ba7ba92a97c8236f3996b827" gracePeriod=30
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.262644 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.208:3000/\": EOF"
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.415670 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3d3acbc7-567c-4f78-a33b-673e5c6b831a","Type":"ContainerDied","Data":"c37bbf44fcd14ae54c5093d23cd907b6b2f78e67d40bf3945c7e764c4ee4365d"}
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.415702 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.415760 4860 scope.go:117] "RemoveContainer" containerID="772b48e124d222cf711b1e5ea1a5f169211f24a0166d495e01711c6493ee4ef4"
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.420964 4860 generic.go:334] "Generic (PLEG): container finished" podID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerID="4b7adae04519d69aff978c864e418e9a1ffc7130f1ea9b9f0cc278091da2d40b" exitCode=2
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.421017 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1458c3fb-42c0-490d-9ad4-efd09aecdd43","Type":"ContainerDied","Data":"4b7adae04519d69aff978c864e418e9a1ffc7130f1ea9b9f0cc278091da2d40b"}
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.464829 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.476540 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.592176 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3acbc7-567c-4f78-a33b-673e5c6b831a" path="/var/lib/kubelet/pods/3d3acbc7-567c-4f78-a33b-673e5c6b831a/volumes"
Jan 21 21:40:58 crc kubenswrapper[4860]: I0121 21:40:58.861349 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4"
Jan 21 21:40:58 crc kubenswrapper[4860]: E0121 21:40:58.914192 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1458c3fb_42c0_490d_9ad4_efd09aecdd43.slice/crio-dd1b967ae96687a74f42d0c76dad537925a122ff7e05114e641de114ca4635b4.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.068529 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a538954-eacf-46bc-b4b2-44baa813c19f-operator-scripts\") pod \"4a538954-eacf-46bc-b4b2-44baa813c19f\" (UID: \"4a538954-eacf-46bc-b4b2-44baa813c19f\") "
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.068733 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkwsc\" (UniqueName: \"kubernetes.io/projected/4a538954-eacf-46bc-b4b2-44baa813c19f-kube-api-access-zkwsc\") pod \"4a538954-eacf-46bc-b4b2-44baa813c19f\" (UID: \"4a538954-eacf-46bc-b4b2-44baa813c19f\") "
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.069469 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a538954-eacf-46bc-b4b2-44baa813c19f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a538954-eacf-46bc-b4b2-44baa813c19f" (UID: "4a538954-eacf-46bc-b4b2-44baa813c19f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.076221 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a538954-eacf-46bc-b4b2-44baa813c19f-kube-api-access-zkwsc" (OuterVolumeSpecName: "kube-api-access-zkwsc") pod "4a538954-eacf-46bc-b4b2-44baa813c19f" (UID: "4a538954-eacf-46bc-b4b2-44baa813c19f"). InnerVolumeSpecName "kube-api-access-zkwsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.180187 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a538954-eacf-46bc-b4b2-44baa813c19f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.180240 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkwsc\" (UniqueName: \"kubernetes.io/projected/4a538954-eacf-46bc-b4b2-44baa813c19f-kube-api-access-zkwsc\") on node \"crc\" DevicePath \"\""
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.436862 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4" event={"ID":"4a538954-eacf-46bc-b4b2-44baa813c19f","Type":"ContainerDied","Data":"e4bf8c19518b5c9f53ef4d56f19e18c466a036dcecae6ad599fbfa5af0ef1efe"}
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.436913 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4bf8c19518b5c9f53ef4d56f19e18c466a036dcecae6ad599fbfa5af0ef1efe"
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.437000 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchera25b-account-delete-rfcd4"
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.448439 4860 generic.go:334] "Generic (PLEG): container finished" podID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerID="69374e3f725b3c8ea506cf437e466698a06c34bde306cbcb8705efa8f5a9a70f" exitCode=0
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.448504 4860 generic.go:334] "Generic (PLEG): container finished" podID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerID="dd1b967ae96687a74f42d0c76dad537925a122ff7e05114e641de114ca4635b4" exitCode=0
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.448540 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1458c3fb-42c0-490d-9ad4-efd09aecdd43","Type":"ContainerDied","Data":"69374e3f725b3c8ea506cf437e466698a06c34bde306cbcb8705efa8f5a9a70f"}
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.448635 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1458c3fb-42c0-490d-9ad4-efd09aecdd43","Type":"ContainerDied","Data":"dd1b967ae96687a74f42d0c76dad537925a122ff7e05114e641de114ca4635b4"}
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.963177 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j67mm"]
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.969013 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-j67mm"]
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.986630 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-a25b-account-create-update-z2swc"]
Jan 21 21:40:59 crc kubenswrapper[4860]: I0121 21:40:59.994861 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchera25b-account-delete-rfcd4"]
Jan 21 21:41:00 crc kubenswrapper[4860]: I0121 21:41:00.002898 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchera25b-account-delete-rfcd4"]
Jan 21 21:41:00 crc kubenswrapper[4860]: I0121 21:41:00.014434 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-a25b-account-create-update-z2swc"]
Jan 21 21:41:00 crc kubenswrapper[4860]: E0121 21:41:00.276300 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:41:00 crc kubenswrapper[4860]: E0121 21:41:00.282201 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:41:00 crc kubenswrapper[4860]: E0121 21:41:00.284608 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 21 21:41:00 crc kubenswrapper[4860]: E0121 21:41:00.284693 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a3203815-18f3-4de0-9887-a921d5f309d3" containerName="watcher-applier"
Jan 21 21:41:00 crc kubenswrapper[4860]: I0121 21:41:00.591275 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bbcbb6f-2445-4dcd-8530-32b068ce64a5" path="/var/lib/kubelet/pods/0bbcbb6f-2445-4dcd-8530-32b068ce64a5/volumes"
Jan 21 21:41:00 crc kubenswrapper[4860]: I0121 21:41:00.592402 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a538954-eacf-46bc-b4b2-44baa813c19f" path="/var/lib/kubelet/pods/4a538954-eacf-46bc-b4b2-44baa813c19f/volumes"
Jan 21 21:41:00 crc kubenswrapper[4860]: I0121 21:41:00.592985 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc20ca9f-a742-40b7-b242-de037cc7f509" path="/var/lib/kubelet/pods/dc20ca9f-a742-40b7-b242-de037cc7f509/volumes"
Jan 21 21:41:00 crc kubenswrapper[4860]: I0121 21:41:00.986398 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.012348 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3203815-18f3-4de0-9887-a921d5f309d3-logs\") pod \"a3203815-18f3-4de0-9887-a921d5f309d3\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") "
Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.012499 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-combined-ca-bundle\") pod \"a3203815-18f3-4de0-9887-a921d5f309d3\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") "
Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.012531 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-cert-memcached-mtls\") pod \"a3203815-18f3-4de0-9887-a921d5f309d3\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") "
Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.012623 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xpb4\" (UniqueName: \"kubernetes.io/projected/a3203815-18f3-4de0-9887-a921d5f309d3-kube-api-access-8xpb4\") pod \"a3203815-18f3-4de0-9887-a921d5f309d3\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") "
Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.012690 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-config-data\") pod \"a3203815-18f3-4de0-9887-a921d5f309d3\" (UID: \"a3203815-18f3-4de0-9887-a921d5f309d3\") "
Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.013086 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3203815-18f3-4de0-9887-a921d5f309d3-logs" (OuterVolumeSpecName: "logs") pod "a3203815-18f3-4de0-9887-a921d5f309d3" (UID: "a3203815-18f3-4de0-9887-a921d5f309d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.013765 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3203815-18f3-4de0-9887-a921d5f309d3-logs\") on node \"crc\" DevicePath \"\""
Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.020132 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3203815-18f3-4de0-9887-a921d5f309d3-kube-api-access-8xpb4" (OuterVolumeSpecName: "kube-api-access-8xpb4") pod "a3203815-18f3-4de0-9887-a921d5f309d3" (UID: "a3203815-18f3-4de0-9887-a921d5f309d3"). InnerVolumeSpecName "kube-api-access-8xpb4".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.043584 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3203815-18f3-4de0-9887-a921d5f309d3" (UID: "a3203815-18f3-4de0-9887-a921d5f309d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.108083 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "a3203815-18f3-4de0-9887-a921d5f309d3" (UID: "a3203815-18f3-4de0-9887-a921d5f309d3"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.109135 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-config-data" (OuterVolumeSpecName: "config-data") pod "a3203815-18f3-4de0-9887-a921d5f309d3" (UID: "a3203815-18f3-4de0-9887-a921d5f309d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.115277 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.115320 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.115331 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xpb4\" (UniqueName: \"kubernetes.io/projected/a3203815-18f3-4de0-9887-a921d5f309d3-kube-api-access-8xpb4\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.115346 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3203815-18f3-4de0-9887-a921d5f309d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.471095 4860 generic.go:334] "Generic (PLEG): container finished" podID="a3203815-18f3-4de0-9887-a921d5f309d3" containerID="6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449" exitCode=0 Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.471159 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a3203815-18f3-4de0-9887-a921d5f309d3","Type":"ContainerDied","Data":"6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449"} Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.471195 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"a3203815-18f3-4de0-9887-a921d5f309d3","Type":"ContainerDied","Data":"298f885270bb0c71e26f431e5bf72567cde5114f9012d53cff47536dbeb3d402"} Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.471217 4860 scope.go:117] "RemoveContainer" containerID="6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.471373 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.520679 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.521299 4860 scope.go:117] "RemoveContainer" containerID="6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449" Jan 21 21:41:01 crc kubenswrapper[4860]: E0121 21:41:01.522391 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449\": container with ID starting with 6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449 not found: ID does not exist" containerID="6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.522497 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449"} err="failed to get container status \"6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449\": rpc error: code = NotFound desc = could not find container \"6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449\": container with ID starting with 6ce4b9d5bf7003b3d70da765032757d465d4f85e3e652d9943c1b02719cd5449 not found: ID does not exist" Jan 21 21:41:01 crc kubenswrapper[4860]: I0121 21:41:01.533740 
4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.028580 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-2xgvp"] Jan 21 21:41:02 crc kubenswrapper[4860]: E0121 21:41:02.029146 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae543dcb-2178-4d58-bc18-d5d56b268598" containerName="watcher-api" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.029163 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae543dcb-2178-4d58-bc18-d5d56b268598" containerName="watcher-api" Jan 21 21:41:02 crc kubenswrapper[4860]: E0121 21:41:02.029185 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3acbc7-567c-4f78-a33b-673e5c6b831a" containerName="watcher-decision-engine" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.029191 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3acbc7-567c-4f78-a33b-673e5c6b831a" containerName="watcher-decision-engine" Jan 21 21:41:02 crc kubenswrapper[4860]: E0121 21:41:02.029204 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3203815-18f3-4de0-9887-a921d5f309d3" containerName="watcher-applier" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.029210 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3203815-18f3-4de0-9887-a921d5f309d3" containerName="watcher-applier" Jan 21 21:41:02 crc kubenswrapper[4860]: E0121 21:41:02.029228 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a538954-eacf-46bc-b4b2-44baa813c19f" containerName="mariadb-account-delete" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.029235 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a538954-eacf-46bc-b4b2-44baa813c19f" containerName="mariadb-account-delete" Jan 21 21:41:02 crc kubenswrapper[4860]: E0121 21:41:02.029257 4860 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ae543dcb-2178-4d58-bc18-d5d56b268598" containerName="watcher-kuttl-api-log" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.029264 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae543dcb-2178-4d58-bc18-d5d56b268598" containerName="watcher-kuttl-api-log" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.030260 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae543dcb-2178-4d58-bc18-d5d56b268598" containerName="watcher-kuttl-api-log" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.030329 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3acbc7-567c-4f78-a33b-673e5c6b831a" containerName="watcher-decision-engine" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.030354 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a538954-eacf-46bc-b4b2-44baa813c19f" containerName="mariadb-account-delete" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.030371 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3203815-18f3-4de0-9887-a921d5f309d3" containerName="watcher-applier" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.030391 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae543dcb-2178-4d58-bc18-d5d56b268598" containerName="watcher-api" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.031346 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-2xgvp" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.059219 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv"] Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.060826 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.070517 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.081351 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-2xgvp"] Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.110791 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv"] Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.139189 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tggf\" (UniqueName: \"kubernetes.io/projected/0e539438-d83d-4693-8e38-f3afd267bede-kube-api-access-7tggf\") pod \"watcher-79b6-account-create-update-kvrkv\" (UID: \"0e539438-d83d-4693-8e38-f3afd267bede\") " pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.139565 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6922693a-30ad-444f-a711-f68a403d2690-operator-scripts\") pod \"watcher-db-create-2xgvp\" (UID: \"6922693a-30ad-444f-a711-f68a403d2690\") " pod="watcher-kuttl-default/watcher-db-create-2xgvp" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.139798 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e539438-d83d-4693-8e38-f3afd267bede-operator-scripts\") pod \"watcher-79b6-account-create-update-kvrkv\" (UID: \"0e539438-d83d-4693-8e38-f3afd267bede\") " pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 
21:41:02.139947 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ggjr\" (UniqueName: \"kubernetes.io/projected/6922693a-30ad-444f-a711-f68a403d2690-kube-api-access-5ggjr\") pod \"watcher-db-create-2xgvp\" (UID: \"6922693a-30ad-444f-a711-f68a403d2690\") " pod="watcher-kuttl-default/watcher-db-create-2xgvp" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.240983 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6922693a-30ad-444f-a711-f68a403d2690-operator-scripts\") pod \"watcher-db-create-2xgvp\" (UID: \"6922693a-30ad-444f-a711-f68a403d2690\") " pod="watcher-kuttl-default/watcher-db-create-2xgvp" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.241108 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e539438-d83d-4693-8e38-f3afd267bede-operator-scripts\") pod \"watcher-79b6-account-create-update-kvrkv\" (UID: \"0e539438-d83d-4693-8e38-f3afd267bede\") " pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.241155 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ggjr\" (UniqueName: \"kubernetes.io/projected/6922693a-30ad-444f-a711-f68a403d2690-kube-api-access-5ggjr\") pod \"watcher-db-create-2xgvp\" (UID: \"6922693a-30ad-444f-a711-f68a403d2690\") " pod="watcher-kuttl-default/watcher-db-create-2xgvp" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.241206 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tggf\" (UniqueName: \"kubernetes.io/projected/0e539438-d83d-4693-8e38-f3afd267bede-kube-api-access-7tggf\") pod \"watcher-79b6-account-create-update-kvrkv\" (UID: \"0e539438-d83d-4693-8e38-f3afd267bede\") " 
pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.242359 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6922693a-30ad-444f-a711-f68a403d2690-operator-scripts\") pod \"watcher-db-create-2xgvp\" (UID: \"6922693a-30ad-444f-a711-f68a403d2690\") " pod="watcher-kuttl-default/watcher-db-create-2xgvp" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.244444 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e539438-d83d-4693-8e38-f3afd267bede-operator-scripts\") pod \"watcher-79b6-account-create-update-kvrkv\" (UID: \"0e539438-d83d-4693-8e38-f3afd267bede\") " pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.265922 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tggf\" (UniqueName: \"kubernetes.io/projected/0e539438-d83d-4693-8e38-f3afd267bede-kube-api-access-7tggf\") pod \"watcher-79b6-account-create-update-kvrkv\" (UID: \"0e539438-d83d-4693-8e38-f3afd267bede\") " pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.273179 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ggjr\" (UniqueName: \"kubernetes.io/projected/6922693a-30ad-444f-a711-f68a403d2690-kube-api-access-5ggjr\") pod \"watcher-db-create-2xgvp\" (UID: \"6922693a-30ad-444f-a711-f68a403d2690\") " pod="watcher-kuttl-default/watcher-db-create-2xgvp" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.361014 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-2xgvp" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.402999 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.519562 4860 generic.go:334] "Generic (PLEG): container finished" podID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerID="30e974f40c4a96845e4d5dfc91b4cf60c10f4ae4ba7ba92a97c8236f3996b827" exitCode=0 Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.519630 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1458c3fb-42c0-490d-9ad4-efd09aecdd43","Type":"ContainerDied","Data":"30e974f40c4a96845e4d5dfc91b4cf60c10f4ae4ba7ba92a97c8236f3996b827"} Jan 21 21:41:02 crc kubenswrapper[4860]: I0121 21:41:02.636452 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3203815-18f3-4de0-9887-a921d5f309d3" path="/var/lib/kubelet/pods/a3203815-18f3-4de0-9887-a921d5f309d3/volumes" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.001437 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-2xgvp"] Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.201832 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.263575 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv"] Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.266437 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-config-data\") pod \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.266979 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-scripts\") pod \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.267010 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-log-httpd\") pod \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.267129 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-combined-ca-bundle\") pod \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.267210 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-run-httpd\") pod \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " Jan 21 21:41:03 
crc kubenswrapper[4860]: I0121 21:41:03.267257 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-sg-core-conf-yaml\") pod \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.267309 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-ceilometer-tls-certs\") pod \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.267390 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlfbb\" (UniqueName: \"kubernetes.io/projected/1458c3fb-42c0-490d-9ad4-efd09aecdd43-kube-api-access-jlfbb\") pod \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\" (UID: \"1458c3fb-42c0-490d-9ad4-efd09aecdd43\") " Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.269074 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1458c3fb-42c0-490d-9ad4-efd09aecdd43" (UID: "1458c3fb-42c0-490d-9ad4-efd09aecdd43"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.277520 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1458c3fb-42c0-490d-9ad4-efd09aecdd43" (UID: "1458c3fb-42c0-490d-9ad4-efd09aecdd43"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.280171 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1458c3fb-42c0-490d-9ad4-efd09aecdd43-kube-api-access-jlfbb" (OuterVolumeSpecName: "kube-api-access-jlfbb") pod "1458c3fb-42c0-490d-9ad4-efd09aecdd43" (UID: "1458c3fb-42c0-490d-9ad4-efd09aecdd43"). InnerVolumeSpecName "kube-api-access-jlfbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.280299 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-scripts" (OuterVolumeSpecName: "scripts") pod "1458c3fb-42c0-490d-9ad4-efd09aecdd43" (UID: "1458c3fb-42c0-490d-9ad4-efd09aecdd43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.370116 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlfbb\" (UniqueName: \"kubernetes.io/projected/1458c3fb-42c0-490d-9ad4-efd09aecdd43-kube-api-access-jlfbb\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.370163 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.370175 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.370190 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1458c3fb-42c0-490d-9ad4-efd09aecdd43-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:03 
crc kubenswrapper[4860]: I0121 21:41:03.391781 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1458c3fb-42c0-490d-9ad4-efd09aecdd43" (UID: "1458c3fb-42c0-490d-9ad4-efd09aecdd43"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.444693 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1458c3fb-42c0-490d-9ad4-efd09aecdd43" (UID: "1458c3fb-42c0-490d-9ad4-efd09aecdd43"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.474502 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.474553 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.494655 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1458c3fb-42c0-490d-9ad4-efd09aecdd43" (UID: "1458c3fb-42c0-490d-9ad4-efd09aecdd43"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.530591 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-config-data" (OuterVolumeSpecName: "config-data") pod "1458c3fb-42c0-490d-9ad4-efd09aecdd43" (UID: "1458c3fb-42c0-490d-9ad4-efd09aecdd43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.531978 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" event={"ID":"0e539438-d83d-4693-8e38-f3afd267bede","Type":"ContainerStarted","Data":"4c6146e9d4ff6a2fac8d0b1f8efc123b9eea0950b8830f1434d483fe2d494da7"} Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.535249 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-2xgvp" event={"ID":"6922693a-30ad-444f-a711-f68a403d2690","Type":"ContainerStarted","Data":"f9e3dc88bc938e9cfa07715dd5eb4da9ea6c41aee21f58e0169f9413ef563d22"} Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.535321 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-2xgvp" event={"ID":"6922693a-30ad-444f-a711-f68a403d2690","Type":"ContainerStarted","Data":"5c698153834dd601a087a5c768cf22ad30b6420f6fb7e40d0781c8d006594505"} Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.538032 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1458c3fb-42c0-490d-9ad4-efd09aecdd43","Type":"ContainerDied","Data":"b0fbfb4090ab60d4528a8df28ce32b1de2dbd365b0fdf0ce4025216340c13e3a"} Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.538099 4860 scope.go:117] "RemoveContainer" containerID="69374e3f725b3c8ea506cf437e466698a06c34bde306cbcb8705efa8f5a9a70f" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 
21:41:03.538122 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.562916 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-2xgvp" podStartSLOduration=1.562894636 podStartE2EDuration="1.562894636s" podCreationTimestamp="2026-01-21 21:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:41:03.555691093 +0000 UTC m=+1955.777869563" watchObservedRunningTime="2026-01-21 21:41:03.562894636 +0000 UTC m=+1955.785073116" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.576327 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.576372 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1458c3fb-42c0-490d-9ad4-efd09aecdd43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.592646 4860 scope.go:117] "RemoveContainer" containerID="4b7adae04519d69aff978c864e418e9a1ffc7130f1ea9b9f0cc278091da2d40b" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.594160 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.604514 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.635757 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:03 crc kubenswrapper[4860]: E0121 21:41:03.636279 4860 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="proxy-httpd" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.636302 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="proxy-httpd" Jan 21 21:41:03 crc kubenswrapper[4860]: E0121 21:41:03.636320 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="ceilometer-notification-agent" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.636327 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="ceilometer-notification-agent" Jan 21 21:41:03 crc kubenswrapper[4860]: E0121 21:41:03.636342 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="sg-core" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.636349 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="sg-core" Jan 21 21:41:03 crc kubenswrapper[4860]: E0121 21:41:03.636358 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="ceilometer-central-agent" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.636363 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="ceilometer-central-agent" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.636513 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="sg-core" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.636527 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="ceilometer-central-agent" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.636538 4860 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="ceilometer-notification-agent" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.636553 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" containerName="proxy-httpd" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.638375 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.645078 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.645331 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.645498 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.666203 4860 scope.go:117] "RemoveContainer" containerID="30e974f40c4a96845e4d5dfc91b4cf60c10f4ae4ba7ba92a97c8236f3996b827" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.668779 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.679604 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.680224 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk42j\" 
(UniqueName: \"kubernetes.io/projected/ddc40713-b77c-4525-901f-224ce1a25b4f-kube-api-access-lk42j\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.680280 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.684740 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.685860 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-scripts\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.687055 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-config-data\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.687592 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.687766 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.706339 4860 scope.go:117] "RemoveContainer" containerID="dd1b967ae96687a74f42d0c76dad537925a122ff7e05114e641de114ca4635b4" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.789817 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-config-data\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.789880 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.789906 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.789951 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.789989 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk42j\" (UniqueName: \"kubernetes.io/projected/ddc40713-b77c-4525-901f-224ce1a25b4f-kube-api-access-lk42j\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.790064 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.790114 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.790144 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-scripts\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.792435 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-log-httpd\") pod \"ceilometer-0\" (UID: 
\"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.793477 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.795852 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.799671 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-config-data\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.801526 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-scripts\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.801884 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.817063 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.821388 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk42j\" (UniqueName: \"kubernetes.io/projected/ddc40713-b77c-4525-901f-224ce1a25b4f-kube-api-access-lk42j\") pod \"ceilometer-0\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:03 crc kubenswrapper[4860]: I0121 21:41:03.978738 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:04 crc kubenswrapper[4860]: I0121 21:41:04.504911 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:04 crc kubenswrapper[4860]: I0121 21:41:04.551115 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ddc40713-b77c-4525-901f-224ce1a25b4f","Type":"ContainerStarted","Data":"4be804055c0cc33c3833ad8cfeb85d7cb70eb1a11678d6631e49ad983596f6ef"} Jan 21 21:41:04 crc kubenswrapper[4860]: I0121 21:41:04.552874 4860 generic.go:334] "Generic (PLEG): container finished" podID="0e539438-d83d-4693-8e38-f3afd267bede" containerID="31bb0aa3bad73f81e96c3d82f53b9a985e67b1c9349560f1df27b350bc4dab5d" exitCode=0 Jan 21 21:41:04 crc kubenswrapper[4860]: I0121 21:41:04.552989 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" event={"ID":"0e539438-d83d-4693-8e38-f3afd267bede","Type":"ContainerDied","Data":"31bb0aa3bad73f81e96c3d82f53b9a985e67b1c9349560f1df27b350bc4dab5d"} Jan 21 21:41:04 crc kubenswrapper[4860]: I0121 21:41:04.555462 4860 generic.go:334] "Generic (PLEG): container finished" 
podID="6922693a-30ad-444f-a711-f68a403d2690" containerID="f9e3dc88bc938e9cfa07715dd5eb4da9ea6c41aee21f58e0169f9413ef563d22" exitCode=0 Jan 21 21:41:04 crc kubenswrapper[4860]: I0121 21:41:04.555621 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-2xgvp" event={"ID":"6922693a-30ad-444f-a711-f68a403d2690","Type":"ContainerDied","Data":"f9e3dc88bc938e9cfa07715dd5eb4da9ea6c41aee21f58e0169f9413ef563d22"} Jan 21 21:41:04 crc kubenswrapper[4860]: I0121 21:41:04.595463 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1458c3fb-42c0-490d-9ad4-efd09aecdd43" path="/var/lib/kubelet/pods/1458c3fb-42c0-490d-9ad4-efd09aecdd43/volumes" Jan 21 21:41:05 crc kubenswrapper[4860]: I0121 21:41:05.570345 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ddc40713-b77c-4525-901f-224ce1a25b4f","Type":"ContainerStarted","Data":"cf1f46cb1fedfb8e33cd198fe87f8c8b28ebaa0fa1918aee574cc9c371d7faaf"} Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.092260 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-2xgvp" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.105462 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.155823 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e539438-d83d-4693-8e38-f3afd267bede-operator-scripts\") pod \"0e539438-d83d-4693-8e38-f3afd267bede\" (UID: \"0e539438-d83d-4693-8e38-f3afd267bede\") " Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.155966 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ggjr\" (UniqueName: \"kubernetes.io/projected/6922693a-30ad-444f-a711-f68a403d2690-kube-api-access-5ggjr\") pod \"6922693a-30ad-444f-a711-f68a403d2690\" (UID: \"6922693a-30ad-444f-a711-f68a403d2690\") " Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.156047 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tggf\" (UniqueName: \"kubernetes.io/projected/0e539438-d83d-4693-8e38-f3afd267bede-kube-api-access-7tggf\") pod \"0e539438-d83d-4693-8e38-f3afd267bede\" (UID: \"0e539438-d83d-4693-8e38-f3afd267bede\") " Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.156132 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6922693a-30ad-444f-a711-f68a403d2690-operator-scripts\") pod \"6922693a-30ad-444f-a711-f68a403d2690\" (UID: \"6922693a-30ad-444f-a711-f68a403d2690\") " Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.157709 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6922693a-30ad-444f-a711-f68a403d2690-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6922693a-30ad-444f-a711-f68a403d2690" (UID: "6922693a-30ad-444f-a711-f68a403d2690"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.157776 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e539438-d83d-4693-8e38-f3afd267bede-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e539438-d83d-4693-8e38-f3afd267bede" (UID: "0e539438-d83d-4693-8e38-f3afd267bede"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.165625 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e539438-d83d-4693-8e38-f3afd267bede-kube-api-access-7tggf" (OuterVolumeSpecName: "kube-api-access-7tggf") pod "0e539438-d83d-4693-8e38-f3afd267bede" (UID: "0e539438-d83d-4693-8e38-f3afd267bede"). InnerVolumeSpecName "kube-api-access-7tggf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.181815 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6922693a-30ad-444f-a711-f68a403d2690-kube-api-access-5ggjr" (OuterVolumeSpecName: "kube-api-access-5ggjr") pod "6922693a-30ad-444f-a711-f68a403d2690" (UID: "6922693a-30ad-444f-a711-f68a403d2690"). InnerVolumeSpecName "kube-api-access-5ggjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.258685 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e539438-d83d-4693-8e38-f3afd267bede-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.258771 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ggjr\" (UniqueName: \"kubernetes.io/projected/6922693a-30ad-444f-a711-f68a403d2690-kube-api-access-5ggjr\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.258793 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tggf\" (UniqueName: \"kubernetes.io/projected/0e539438-d83d-4693-8e38-f3afd267bede-kube-api-access-7tggf\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.258805 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6922693a-30ad-444f-a711-f68a403d2690-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.593283 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ddc40713-b77c-4525-901f-224ce1a25b4f","Type":"ContainerStarted","Data":"6b1857661b63edc474bcb2fb1cd114a2b5497f25e2621ae9278912b6e78a4974"} Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.594847 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.594851 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv" event={"ID":"0e539438-d83d-4693-8e38-f3afd267bede","Type":"ContainerDied","Data":"4c6146e9d4ff6a2fac8d0b1f8efc123b9eea0950b8830f1434d483fe2d494da7"} Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.594913 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c6146e9d4ff6a2fac8d0b1f8efc123b9eea0950b8830f1434d483fe2d494da7" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.596473 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-2xgvp" event={"ID":"6922693a-30ad-444f-a711-f68a403d2690","Type":"ContainerDied","Data":"5c698153834dd601a087a5c768cf22ad30b6420f6fb7e40d0781c8d006594505"} Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.596502 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c698153834dd601a087a5c768cf22ad30b6420f6fb7e40d0781c8d006594505" Jan 21 21:41:06 crc kubenswrapper[4860]: I0121 21:41:06.596584 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-2xgvp" Jan 21 21:41:07 crc kubenswrapper[4860]: I0121 21:41:07.609357 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ddc40713-b77c-4525-901f-224ce1a25b4f","Type":"ContainerStarted","Data":"08e0284eb5a090099b75efb3ce687056d92ecdb8722d786c2b1930ab9a31fd3a"} Jan 21 21:41:08 crc kubenswrapper[4860]: I0121 21:41:08.632591 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:08 crc kubenswrapper[4860]: I0121 21:41:08.661391 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.80681746 podStartE2EDuration="5.661358637s" podCreationTimestamp="2026-01-21 21:41:03 +0000 UTC" firstStartedPulling="2026-01-21 21:41:04.513608134 +0000 UTC m=+1956.735786604" lastFinishedPulling="2026-01-21 21:41:08.368149311 +0000 UTC m=+1960.590327781" observedRunningTime="2026-01-21 21:41:08.656661682 +0000 UTC m=+1960.878840172" watchObservedRunningTime="2026-01-21 21:41:08.661358637 +0000 UTC m=+1960.883537107" Jan 21 21:41:09 crc kubenswrapper[4860]: I0121 21:41:09.643590 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ddc40713-b77c-4525-901f-224ce1a25b4f","Type":"ContainerStarted","Data":"be36754fdb224fa431e2d6445f12bad36c598994f921223899c4ed8aeb5b52c2"} Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.412417 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-88k5x"] Jan 21 21:41:12 crc kubenswrapper[4860]: E0121 21:41:12.413338 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e539438-d83d-4693-8e38-f3afd267bede" containerName="mariadb-account-create-update" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.413356 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0e539438-d83d-4693-8e38-f3afd267bede" containerName="mariadb-account-create-update" Jan 21 21:41:12 crc kubenswrapper[4860]: E0121 21:41:12.413367 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6922693a-30ad-444f-a711-f68a403d2690" containerName="mariadb-database-create" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.413373 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6922693a-30ad-444f-a711-f68a403d2690" containerName="mariadb-database-create" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.413536 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e539438-d83d-4693-8e38-f3afd267bede" containerName="mariadb-account-create-update" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.413560 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="6922693a-30ad-444f-a711-f68a403d2690" containerName="mariadb-database-create" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.414219 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.418488 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.418641 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-jtd2l" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.428978 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-88k5x"] Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.492636 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.493316 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-config-data\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.493457 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl76h\" (UniqueName: \"kubernetes.io/projected/d805855a-23fa-43c2-a20a-402bb8f32581-kube-api-access-cl76h\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.493513 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-db-sync-config-data\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.596167 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl76h\" (UniqueName: \"kubernetes.io/projected/d805855a-23fa-43c2-a20a-402bb8f32581-kube-api-access-cl76h\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.596250 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-db-sync-config-data\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.596317 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.596385 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-config-data\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 
21:41:12.613182 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.614787 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-config-data\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.634999 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-db-sync-config-data\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.636873 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl76h\" (UniqueName: \"kubernetes.io/projected/d805855a-23fa-43c2-a20a-402bb8f32581-kube-api-access-cl76h\") pod \"watcher-kuttl-db-sync-88k5x\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:12 crc kubenswrapper[4860]: I0121 21:41:12.737505 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:13 crc kubenswrapper[4860]: I0121 21:41:13.305185 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-88k5x"] Jan 21 21:41:13 crc kubenswrapper[4860]: W0121 21:41:13.305188 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd805855a_23fa_43c2_a20a_402bb8f32581.slice/crio-5ca85c925bc54a6d1c2bd0c42a34a0b89bd2bae8f6530f6d1af47cb0802492ec WatchSource:0}: Error finding container 5ca85c925bc54a6d1c2bd0c42a34a0b89bd2bae8f6530f6d1af47cb0802492ec: Status 404 returned error can't find the container with id 5ca85c925bc54a6d1c2bd0c42a34a0b89bd2bae8f6530f6d1af47cb0802492ec Jan 21 21:41:13 crc kubenswrapper[4860]: I0121 21:41:13.718008 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" event={"ID":"d805855a-23fa-43c2-a20a-402bb8f32581","Type":"ContainerStarted","Data":"7885b1023e6930fa06a502a789002b8c487f29918594dc2cb7bdf8c47420552b"} Jan 21 21:41:13 crc kubenswrapper[4860]: I0121 21:41:13.718491 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" event={"ID":"d805855a-23fa-43c2-a20a-402bb8f32581","Type":"ContainerStarted","Data":"5ca85c925bc54a6d1c2bd0c42a34a0b89bd2bae8f6530f6d1af47cb0802492ec"} Jan 21 21:41:13 crc kubenswrapper[4860]: I0121 21:41:13.741073 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" podStartSLOduration=1.741038567 podStartE2EDuration="1.741038567s" podCreationTimestamp="2026-01-21 21:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:41:13.734525226 +0000 UTC m=+1965.956703696" watchObservedRunningTime="2026-01-21 
21:41:13.741038567 +0000 UTC m=+1965.963217057" Jan 21 21:41:15 crc kubenswrapper[4860]: I0121 21:41:15.059901 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-8wdfp"] Jan 21 21:41:15 crc kubenswrapper[4860]: I0121 21:41:15.068559 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-8wdfp"] Jan 21 21:41:16 crc kubenswrapper[4860]: I0121 21:41:16.590576 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695edaa1-d556-4a7c-bb54-fa518455069a" path="/var/lib/kubelet/pods/695edaa1-d556-4a7c-bb54-fa518455069a/volumes" Jan 21 21:41:16 crc kubenswrapper[4860]: I0121 21:41:16.748514 4860 generic.go:334] "Generic (PLEG): container finished" podID="d805855a-23fa-43c2-a20a-402bb8f32581" containerID="7885b1023e6930fa06a502a789002b8c487f29918594dc2cb7bdf8c47420552b" exitCode=0 Jan 21 21:41:16 crc kubenswrapper[4860]: I0121 21:41:16.748575 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" event={"ID":"d805855a-23fa-43c2-a20a-402bb8f32581","Type":"ContainerDied","Data":"7885b1023e6930fa06a502a789002b8c487f29918594dc2cb7bdf8c47420552b"} Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.093475 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5k4pz/must-gather-t8b54"] Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.096178 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5k4pz/must-gather-t8b54" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.100308 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5k4pz"/"openshift-service-ca.crt" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.103042 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5k4pz"/"kube-root-ca.crt" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.125267 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5k4pz/must-gather-t8b54"] Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.186547 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.207392 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2c12be4-8e69-45c0-88a0-e2148aae2e90-must-gather-output\") pod \"must-gather-t8b54\" (UID: \"f2c12be4-8e69-45c0-88a0-e2148aae2e90\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.207922 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbvnq\" (UniqueName: \"kubernetes.io/projected/f2c12be4-8e69-45c0-88a0-e2148aae2e90-kube-api-access-hbvnq\") pod \"must-gather-t8b54\" (UID: \"f2c12be4-8e69-45c0-88a0-e2148aae2e90\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.309447 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-config-data\") pod \"d805855a-23fa-43c2-a20a-402bb8f32581\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " Jan 
21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.309668 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl76h\" (UniqueName: \"kubernetes.io/projected/d805855a-23fa-43c2-a20a-402bb8f32581-kube-api-access-cl76h\") pod \"d805855a-23fa-43c2-a20a-402bb8f32581\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.309712 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-db-sync-config-data\") pod \"d805855a-23fa-43c2-a20a-402bb8f32581\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.309789 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-combined-ca-bundle\") pod \"d805855a-23fa-43c2-a20a-402bb8f32581\" (UID: \"d805855a-23fa-43c2-a20a-402bb8f32581\") " Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.310146 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2c12be4-8e69-45c0-88a0-e2148aae2e90-must-gather-output\") pod \"must-gather-t8b54\" (UID: \"f2c12be4-8e69-45c0-88a0-e2148aae2e90\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.310171 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbvnq\" (UniqueName: \"kubernetes.io/projected/f2c12be4-8e69-45c0-88a0-e2148aae2e90-kube-api-access-hbvnq\") pod \"must-gather-t8b54\" (UID: \"f2c12be4-8e69-45c0-88a0-e2148aae2e90\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.310689 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2c12be4-8e69-45c0-88a0-e2148aae2e90-must-gather-output\") pod \"must-gather-t8b54\" (UID: \"f2c12be4-8e69-45c0-88a0-e2148aae2e90\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.316974 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d805855a-23fa-43c2-a20a-402bb8f32581" (UID: "d805855a-23fa-43c2-a20a-402bb8f32581"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.329920 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbvnq\" (UniqueName: \"kubernetes.io/projected/f2c12be4-8e69-45c0-88a0-e2148aae2e90-kube-api-access-hbvnq\") pod \"must-gather-t8b54\" (UID: \"f2c12be4-8e69-45c0-88a0-e2148aae2e90\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.330927 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d805855a-23fa-43c2-a20a-402bb8f32581-kube-api-access-cl76h" (OuterVolumeSpecName: "kube-api-access-cl76h") pod "d805855a-23fa-43c2-a20a-402bb8f32581" (UID: "d805855a-23fa-43c2-a20a-402bb8f32581"). InnerVolumeSpecName "kube-api-access-cl76h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.364013 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d805855a-23fa-43c2-a20a-402bb8f32581" (UID: "d805855a-23fa-43c2-a20a-402bb8f32581"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.371079 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-config-data" (OuterVolumeSpecName: "config-data") pod "d805855a-23fa-43c2-a20a-402bb8f32581" (UID: "d805855a-23fa-43c2-a20a-402bb8f32581"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.413490 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl76h\" (UniqueName: \"kubernetes.io/projected/d805855a-23fa-43c2-a20a-402bb8f32581-kube-api-access-cl76h\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.413535 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.413547 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.413556 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d805855a-23fa-43c2-a20a-402bb8f32581-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.500741 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5k4pz/must-gather-t8b54" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.787191 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.791576 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-88k5x" event={"ID":"d805855a-23fa-43c2-a20a-402bb8f32581","Type":"ContainerDied","Data":"5ca85c925bc54a6d1c2bd0c42a34a0b89bd2bae8f6530f6d1af47cb0802492ec"} Jan 21 21:41:18 crc kubenswrapper[4860]: I0121 21:41:18.791740 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ca85c925bc54a6d1c2bd0c42a34a0b89bd2bae8f6530f6d1af47cb0802492ec" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.038401 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5k4pz/must-gather-t8b54"] Jan 21 21:41:19 crc kubenswrapper[4860]: W0121 21:41:19.041709 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2c12be4_8e69_45c0_88a0_e2148aae2e90.slice/crio-c1c8c2f5ca8879192243ce7a1b6c47713df23cc6ca8bf7c04328c0ed8badd0f9 WatchSource:0}: Error finding container c1c8c2f5ca8879192243ce7a1b6c47713df23cc6ca8bf7c04328c0ed8badd0f9: Status 404 returned error can't find the container with id c1c8c2f5ca8879192243ce7a1b6c47713df23cc6ca8bf7c04328c0ed8badd0f9 Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.125463 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:41:19 crc kubenswrapper[4860]: E0121 21:41:19.127659 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d805855a-23fa-43c2-a20a-402bb8f32581" containerName="watcher-kuttl-db-sync" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.127714 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d805855a-23fa-43c2-a20a-402bb8f32581" containerName="watcher-kuttl-db-sync" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.128069 4860 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d805855a-23fa-43c2-a20a-402bb8f32581" containerName="watcher-kuttl-db-sync" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.129506 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.135690 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.138430 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-jtd2l" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.151231 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.166507 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.168073 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.188680 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.225515 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.252316 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.252687 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.252757 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.252808 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-config-data\") pod \"watcher-kuttl-api-0\" (UID: 
\"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.252924 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.252994 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf99t\" (UniqueName: \"kubernetes.io/projected/4153376c-98ed-4299-a5a4-8d1f29ce2abe-kube-api-access-pf99t\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.253036 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.253062 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9b98fb-224f-4b32-9df5-29510803f415-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.253130 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4153376c-98ed-4299-a5a4-8d1f29ce2abe-logs\") pod \"watcher-kuttl-applier-0\" 
(UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.253265 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ml54\" (UniqueName: \"kubernetes.io/projected/9a9b98fb-224f-4b32-9df5-29510803f415-kube-api-access-4ml54\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.253331 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.331445 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.338236 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.346039 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.346242 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.356237 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.356368 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.356393 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.356417 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.356461 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.356485 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf99t\" (UniqueName: \"kubernetes.io/projected/4153376c-98ed-4299-a5a4-8d1f29ce2abe-kube-api-access-pf99t\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.356520 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.356538 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9b98fb-224f-4b32-9df5-29510803f415-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.356571 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4153376c-98ed-4299-a5a4-8d1f29ce2abe-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: 
I0121 21:41:19.356624 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ml54\" (UniqueName: \"kubernetes.io/projected/9a9b98fb-224f-4b32-9df5-29510803f415-kube-api-access-4ml54\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.356661 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.358507 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9b98fb-224f-4b32-9df5-29510803f415-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.358875 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4153376c-98ed-4299-a5a4-8d1f29ce2abe-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.364707 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.365993 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.371825 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.373886 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.378066 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.385444 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.390620 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf99t\" (UniqueName: 
\"kubernetes.io/projected/4153376c-98ed-4299-a5a4-8d1f29ce2abe-kube-api-access-pf99t\") pod \"watcher-kuttl-applier-0\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.393135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.394688 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ml54\" (UniqueName: \"kubernetes.io/projected/9a9b98fb-224f-4b32-9df5-29510803f415-kube-api-access-4ml54\") pod \"watcher-kuttl-api-0\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.458125 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxh9\" (UniqueName: \"kubernetes.io/projected/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-kube-api-access-wjxh9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.458664 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.458710 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.458774 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.458819 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.458854 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.460363 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.522320 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.561846 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.561957 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.561994 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.562029 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.562116 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjxh9\" (UniqueName: \"kubernetes.io/projected/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-kube-api-access-wjxh9\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.562155 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.565211 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.568555 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.568856 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.573491 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.574592 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.586398 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjxh9\" (UniqueName: \"kubernetes.io/projected/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-kube-api-access-wjxh9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.660761 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:19 crc kubenswrapper[4860]: I0121 21:41:19.823214 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5k4pz/must-gather-t8b54" event={"ID":"f2c12be4-8e69-45c0-88a0-e2148aae2e90","Type":"ContainerStarted","Data":"c1c8c2f5ca8879192243ce7a1b6c47713df23cc6ca8bf7c04328c0ed8badd0f9"} Jan 21 21:41:20 crc kubenswrapper[4860]: I0121 21:41:20.170873 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:41:20 crc kubenswrapper[4860]: I0121 21:41:20.286857 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:41:20 crc kubenswrapper[4860]: I0121 21:41:20.449781 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:41:20 crc kubenswrapper[4860]: W0121 21:41:20.456374 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa4f0d2_da58_4d3e_80ae_aa27ed3c7595.slice/crio-e3e41358efcd66aee0f903c70b598df832e29524dbe4231c521ee6e780280c4f WatchSource:0}: Error finding container e3e41358efcd66aee0f903c70b598df832e29524dbe4231c521ee6e780280c4f: Status 404 returned error can't find the container with id e3e41358efcd66aee0f903c70b598df832e29524dbe4231c521ee6e780280c4f Jan 21 21:41:20 crc kubenswrapper[4860]: I0121 21:41:20.905321 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9a9b98fb-224f-4b32-9df5-29510803f415","Type":"ContainerStarted","Data":"6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db"} Jan 21 21:41:20 crc kubenswrapper[4860]: I0121 21:41:20.905866 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"9a9b98fb-224f-4b32-9df5-29510803f415","Type":"ContainerStarted","Data":"6616d2d6437d3df06057de0ad5a79c2105566e26816e1742d301886f69bb8a95"} Jan 21 21:41:20 crc kubenswrapper[4860]: I0121 21:41:20.914697 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595","Type":"ContainerStarted","Data":"e3e41358efcd66aee0f903c70b598df832e29524dbe4231c521ee6e780280c4f"} Jan 21 21:41:20 crc kubenswrapper[4860]: I0121 21:41:20.920617 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"4153376c-98ed-4299-a5a4-8d1f29ce2abe","Type":"ContainerStarted","Data":"84516655dc28b72f492f7f7462696f11d37641cf40f9c244488e43be9a63dda6"} Jan 21 21:41:20 crc kubenswrapper[4860]: I0121 21:41:20.947504 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.9474630290000001 podStartE2EDuration="1.947463029s" podCreationTimestamp="2026-01-21 21:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:41:20.943027703 +0000 UTC m=+1973.165206193" watchObservedRunningTime="2026-01-21 21:41:20.947463029 +0000 UTC m=+1973.169641499" Jan 21 21:41:21 crc kubenswrapper[4860]: I0121 21:41:21.933865 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"4153376c-98ed-4299-a5a4-8d1f29ce2abe","Type":"ContainerStarted","Data":"bda212c490106f4a123f4401f3c58509ceba9c0dbb3f5ff53922deccc24b9d76"} Jan 21 21:41:21 crc kubenswrapper[4860]: I0121 21:41:21.936989 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"9a9b98fb-224f-4b32-9df5-29510803f415","Type":"ContainerStarted","Data":"b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762"} Jan 21 21:41:21 crc kubenswrapper[4860]: I0121 21:41:21.937357 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:21 crc kubenswrapper[4860]: I0121 21:41:21.941663 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595","Type":"ContainerStarted","Data":"04fedbd6ec8764d8cbfa7f90a9ef13a8e435d50247aebf169e7cda15b9e372f7"} Jan 21 21:41:21 crc kubenswrapper[4860]: I0121 21:41:21.967884 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.967858126 podStartE2EDuration="2.967858126s" podCreationTimestamp="2026-01-21 21:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:41:21.965744701 +0000 UTC m=+1974.187923171" watchObservedRunningTime="2026-01-21 21:41:21.967858126 +0000 UTC m=+1974.190036606" Jan 21 21:41:22 crc kubenswrapper[4860]: I0121 21:41:22.001035 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=3.001006662 podStartE2EDuration="3.001006662s" podCreationTimestamp="2026-01-21 21:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:41:21.994340455 +0000 UTC m=+1974.216518945" watchObservedRunningTime="2026-01-21 21:41:22.001006662 +0000 UTC m=+1974.223185142" Jan 21 21:41:24 crc kubenswrapper[4860]: I0121 21:41:24.462537 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
Jan 21 21:41:24 crc kubenswrapper[4860]: I0121 21:41:24.463305 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:41:24 crc kubenswrapper[4860]: I0121 21:41:24.523207 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:24 crc kubenswrapper[4860]: I0121 21:41:24.725024 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:29 crc kubenswrapper[4860]: I0121 21:41:29.469925 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:29 crc kubenswrapper[4860]: I0121 21:41:29.485395 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:29 crc kubenswrapper[4860]: I0121 21:41:29.533396 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:29 crc kubenswrapper[4860]: I0121 21:41:29.661771 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:29 crc kubenswrapper[4860]: I0121 21:41:29.685134 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:29 crc kubenswrapper[4860]: I0121 21:41:29.753339 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:30 crc kubenswrapper[4860]: I0121 21:41:30.067258 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5k4pz/must-gather-t8b54" event={"ID":"f2c12be4-8e69-45c0-88a0-e2148aae2e90","Type":"ContainerStarted","Data":"71116cd99910e33548b80399020d100ba2719488e4440d2a19738870d1d6cb90"} Jan 21 21:41:30 
crc kubenswrapper[4860]: I0121 21:41:30.068631 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:30 crc kubenswrapper[4860]: I0121 21:41:30.082132 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:30 crc kubenswrapper[4860]: I0121 21:41:30.117562 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:30 crc kubenswrapper[4860]: I0121 21:41:30.120970 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:31 crc kubenswrapper[4860]: I0121 21:41:31.077074 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5k4pz/must-gather-t8b54" event={"ID":"f2c12be4-8e69-45c0-88a0-e2148aae2e90","Type":"ContainerStarted","Data":"f2d8b390669fccd91ae8f536452c388a7318d6b54bdf7b11e46639a43dde4642"} Jan 21 21:41:31 crc kubenswrapper[4860]: I0121 21:41:31.106056 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5k4pz/must-gather-t8b54" podStartSLOduration=2.7998935019999998 podStartE2EDuration="13.106027385s" podCreationTimestamp="2026-01-21 21:41:18 +0000 UTC" firstStartedPulling="2026-01-21 21:41:19.044769353 +0000 UTC m=+1971.266947813" lastFinishedPulling="2026-01-21 21:41:29.350903226 +0000 UTC m=+1981.573081696" observedRunningTime="2026-01-21 21:41:31.099706779 +0000 UTC m=+1983.321885249" watchObservedRunningTime="2026-01-21 21:41:31.106027385 +0000 UTC m=+1983.328205855" Jan 21 21:41:33 crc kubenswrapper[4860]: I0121 21:41:33.293741 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:33 crc kubenswrapper[4860]: I0121 21:41:33.294549 4860 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="watcher-kuttl-default/ceilometer-0" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="ceilometer-central-agent" containerID="cri-o://cf1f46cb1fedfb8e33cd198fe87f8c8b28ebaa0fa1918aee574cc9c371d7faaf" gracePeriod=30 Jan 21 21:41:33 crc kubenswrapper[4860]: I0121 21:41:33.294720 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="ceilometer-notification-agent" containerID="cri-o://6b1857661b63edc474bcb2fb1cd114a2b5497f25e2621ae9278912b6e78a4974" gracePeriod=30 Jan 21 21:41:33 crc kubenswrapper[4860]: I0121 21:41:33.294711 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="sg-core" containerID="cri-o://08e0284eb5a090099b75efb3ce687056d92ecdb8722d786c2b1930ab9a31fd3a" gracePeriod=30 Jan 21 21:41:33 crc kubenswrapper[4860]: I0121 21:41:33.294992 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="proxy-httpd" containerID="cri-o://be36754fdb224fa431e2d6445f12bad36c598994f921223899c4ed8aeb5b52c2" gracePeriod=30 Jan 21 21:41:33 crc kubenswrapper[4860]: I0121 21:41:33.408678 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.213:3000/\": read tcp 10.217.0.2:48284->10.217.0.213:3000: read: connection reset by peer" Jan 21 21:41:33 crc kubenswrapper[4860]: I0121 21:41:33.980278 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.213:3000/\": 
dial tcp 10.217.0.213:3000: connect: connection refused" Jan 21 21:41:34 crc kubenswrapper[4860]: I0121 21:41:34.213586 4860 generic.go:334] "Generic (PLEG): container finished" podID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerID="be36754fdb224fa431e2d6445f12bad36c598994f921223899c4ed8aeb5b52c2" exitCode=0 Jan 21 21:41:34 crc kubenswrapper[4860]: I0121 21:41:34.214083 4860 generic.go:334] "Generic (PLEG): container finished" podID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerID="08e0284eb5a090099b75efb3ce687056d92ecdb8722d786c2b1930ab9a31fd3a" exitCode=2 Jan 21 21:41:34 crc kubenswrapper[4860]: I0121 21:41:34.214141 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ddc40713-b77c-4525-901f-224ce1a25b4f","Type":"ContainerDied","Data":"be36754fdb224fa431e2d6445f12bad36c598994f921223899c4ed8aeb5b52c2"} Jan 21 21:41:34 crc kubenswrapper[4860]: I0121 21:41:34.214301 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ddc40713-b77c-4525-901f-224ce1a25b4f","Type":"ContainerDied","Data":"08e0284eb5a090099b75efb3ce687056d92ecdb8722d786c2b1930ab9a31fd3a"} Jan 21 21:41:35 crc kubenswrapper[4860]: I0121 21:41:35.229658 4860 generic.go:334] "Generic (PLEG): container finished" podID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerID="cf1f46cb1fedfb8e33cd198fe87f8c8b28ebaa0fa1918aee574cc9c371d7faaf" exitCode=0 Jan 21 21:41:35 crc kubenswrapper[4860]: I0121 21:41:35.229731 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ddc40713-b77c-4525-901f-224ce1a25b4f","Type":"ContainerDied","Data":"cf1f46cb1fedfb8e33cd198fe87f8c8b28ebaa0fa1918aee574cc9c371d7faaf"} Jan 21 21:41:38 crc kubenswrapper[4860]: I0121 21:41:38.647517 4860 scope.go:117] "RemoveContainer" containerID="4d2faa002ef1a13f9a70c36d0fe905f19370c4899d739453f973a734a1998317" Jan 21 21:41:38 crc kubenswrapper[4860]: I0121 21:41:38.966207 4860 
scope.go:117] "RemoveContainer" containerID="b5084912036c77078d058de68911d6ad2f6c077af202d00e0507d3380ed0b59f" Jan 21 21:41:39 crc kubenswrapper[4860]: I0121 21:41:39.288999 4860 generic.go:334] "Generic (PLEG): container finished" podID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerID="6b1857661b63edc474bcb2fb1cd114a2b5497f25e2621ae9278912b6e78a4974" exitCode=0 Jan 21 21:41:39 crc kubenswrapper[4860]: I0121 21:41:39.289090 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ddc40713-b77c-4525-901f-224ce1a25b4f","Type":"ContainerDied","Data":"6b1857661b63edc474bcb2fb1cd114a2b5497f25e2621ae9278912b6e78a4974"} Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.827317 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.980262 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-sg-core-conf-yaml\") pod \"ddc40713-b77c-4525-901f-224ce1a25b4f\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.980343 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-log-httpd\") pod \"ddc40713-b77c-4525-901f-224ce1a25b4f\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.980406 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk42j\" (UniqueName: \"kubernetes.io/projected/ddc40713-b77c-4525-901f-224ce1a25b4f-kube-api-access-lk42j\") pod \"ddc40713-b77c-4525-901f-224ce1a25b4f\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.980463 4860 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-scripts\") pod \"ddc40713-b77c-4525-901f-224ce1a25b4f\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.980508 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-run-httpd\") pod \"ddc40713-b77c-4525-901f-224ce1a25b4f\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.980666 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-ceilometer-tls-certs\") pod \"ddc40713-b77c-4525-901f-224ce1a25b4f\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.980727 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-config-data\") pod \"ddc40713-b77c-4525-901f-224ce1a25b4f\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.980777 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-combined-ca-bundle\") pod \"ddc40713-b77c-4525-901f-224ce1a25b4f\" (UID: \"ddc40713-b77c-4525-901f-224ce1a25b4f\") " Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.981041 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ddc40713-b77c-4525-901f-224ce1a25b4f" (UID: 
"ddc40713-b77c-4525-901f-224ce1a25b4f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.981202 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ddc40713-b77c-4525-901f-224ce1a25b4f" (UID: "ddc40713-b77c-4525-901f-224ce1a25b4f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.981354 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.981382 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddc40713-b77c-4525-901f-224ce1a25b4f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.993234 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc40713-b77c-4525-901f-224ce1a25b4f-kube-api-access-lk42j" (OuterVolumeSpecName: "kube-api-access-lk42j") pod "ddc40713-b77c-4525-901f-224ce1a25b4f" (UID: "ddc40713-b77c-4525-901f-224ce1a25b4f"). InnerVolumeSpecName "kube-api-access-lk42j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:40 crc kubenswrapper[4860]: I0121 21:41:40.999148 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-scripts" (OuterVolumeSpecName: "scripts") pod "ddc40713-b77c-4525-901f-224ce1a25b4f" (UID: "ddc40713-b77c-4525-901f-224ce1a25b4f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.025325 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ddc40713-b77c-4525-901f-224ce1a25b4f" (UID: "ddc40713-b77c-4525-901f-224ce1a25b4f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.037064 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ddc40713-b77c-4525-901f-224ce1a25b4f" (UID: "ddc40713-b77c-4525-901f-224ce1a25b4f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.059158 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddc40713-b77c-4525-901f-224ce1a25b4f" (UID: "ddc40713-b77c-4525-901f-224ce1a25b4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.097182 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.097252 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk42j\" (UniqueName: \"kubernetes.io/projected/ddc40713-b77c-4525-901f-224ce1a25b4f-kube-api-access-lk42j\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.097273 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.097285 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.097295 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.120986 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-config-data" (OuterVolumeSpecName: "config-data") pod "ddc40713-b77c-4525-901f-224ce1a25b4f" (UID: "ddc40713-b77c-4525-901f-224ce1a25b4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.199379 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddc40713-b77c-4525-901f-224ce1a25b4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.313631 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ddc40713-b77c-4525-901f-224ce1a25b4f","Type":"ContainerDied","Data":"4be804055c0cc33c3833ad8cfeb85d7cb70eb1a11678d6631e49ad983596f6ef"} Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.313767 4860 scope.go:117] "RemoveContainer" containerID="be36754fdb224fa431e2d6445f12bad36c598994f921223899c4ed8aeb5b52c2" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.313860 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.347320 4860 scope.go:117] "RemoveContainer" containerID="08e0284eb5a090099b75efb3ce687056d92ecdb8722d786c2b1930ab9a31fd3a" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.362117 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.369288 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.376364 4860 scope.go:117] "RemoveContainer" containerID="6b1857661b63edc474bcb2fb1cd114a2b5497f25e2621ae9278912b6e78a4974" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.401004 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:41 crc kubenswrapper[4860]: E0121 21:41:41.401574 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" 
containerName="sg-core" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.401602 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="sg-core" Jan 21 21:41:41 crc kubenswrapper[4860]: E0121 21:41:41.401625 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="ceilometer-notification-agent" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.401634 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="ceilometer-notification-agent" Jan 21 21:41:41 crc kubenswrapper[4860]: E0121 21:41:41.401650 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="proxy-httpd" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.401658 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="proxy-httpd" Jan 21 21:41:41 crc kubenswrapper[4860]: E0121 21:41:41.401680 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="ceilometer-central-agent" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.401688 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="ceilometer-central-agent" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.401923 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="ceilometer-notification-agent" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.401976 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="sg-core" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.401988 4860 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="proxy-httpd" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.401996 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" containerName="ceilometer-central-agent" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.405042 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.410475 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.411377 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.411387 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.426339 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.434059 4860 scope.go:117] "RemoveContainer" containerID="cf1f46cb1fedfb8e33cd198fe87f8c8b28ebaa0fa1918aee574cc9c371d7faaf" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.502972 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8m5k\" (UniqueName: \"kubernetes.io/projected/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-kube-api-access-t8m5k\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.503418 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.503542 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.503639 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.503737 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-config-data\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.503842 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.503993 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.504125 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-scripts\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.604991 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-scripts\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.605355 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8m5k\" (UniqueName: \"kubernetes.io/projected/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-kube-api-access-t8m5k\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.605449 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.605783 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.605898 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.606042 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-config-data\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.606170 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.606330 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.606630 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.607472 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.611366 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.611614 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-config-data\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.611861 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-scripts\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.612370 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.614300 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.635527 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8m5k\" (UniqueName: \"kubernetes.io/projected/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-kube-api-access-t8m5k\") pod \"ceilometer-0\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:41 crc kubenswrapper[4860]: I0121 21:41:41.736727 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.255959 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.326644 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f09daa3b-b912-4cf6-ab2a-372f8d955b4f","Type":"ContainerStarted","Data":"05112007a123904472c56a5e82dab935db7d000bcbcc6692e26403a1d5bb38e7"} Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.476381 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-88k5x"] Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.489372 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-88k5x"] Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.563684 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher79b6-account-delete-9pxvs"] Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.565050 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.604063 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d805855a-23fa-43c2-a20a-402bb8f32581" path="/var/lib/kubelet/pods/d805855a-23fa-43c2-a20a-402bb8f32581/volumes" Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.605144 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddc40713-b77c-4525-901f-224ce1a25b4f" path="/var/lib/kubelet/pods/ddc40713-b77c-4525-901f-224ce1a25b4f/volumes" Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.614425 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher79b6-account-delete-9pxvs"] Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.672662 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.673089 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" containerName="watcher-decision-engine" containerID="cri-o://04fedbd6ec8764d8cbfa7f90a9ef13a8e435d50247aebf169e7cda15b9e372f7" gracePeriod=30 Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.729768 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq6zp\" (UniqueName: \"kubernetes.io/projected/ef4488d5-b434-41d2-96ac-5fdf02a677a2-kube-api-access-lq6zp\") pod \"watcher79b6-account-delete-9pxvs\" (UID: \"ef4488d5-b434-41d2-96ac-5fdf02a677a2\") " pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.729908 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ef4488d5-b434-41d2-96ac-5fdf02a677a2-operator-scripts\") pod \"watcher79b6-account-delete-9pxvs\" (UID: \"ef4488d5-b434-41d2-96ac-5fdf02a677a2\") " pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.830039 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.831386 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq6zp\" (UniqueName: \"kubernetes.io/projected/ef4488d5-b434-41d2-96ac-5fdf02a677a2-kube-api-access-lq6zp\") pod \"watcher79b6-account-delete-9pxvs\" (UID: \"ef4488d5-b434-41d2-96ac-5fdf02a677a2\") " pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.831526 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4488d5-b434-41d2-96ac-5fdf02a677a2-operator-scripts\") pod \"watcher79b6-account-delete-9pxvs\" (UID: \"ef4488d5-b434-41d2-96ac-5fdf02a677a2\") " pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.832330 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9a9b98fb-224f-4b32-9df5-29510803f415" containerName="watcher-kuttl-api-log" containerID="cri-o://6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db" gracePeriod=30 Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.832578 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4488d5-b434-41d2-96ac-5fdf02a677a2-operator-scripts\") pod \"watcher79b6-account-delete-9pxvs\" (UID: \"ef4488d5-b434-41d2-96ac-5fdf02a677a2\") " 
pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.832903 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9a9b98fb-224f-4b32-9df5-29510803f415" containerName="watcher-api" containerID="cri-o://b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762" gracePeriod=30 Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.865048 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.865361 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="4153376c-98ed-4299-a5a4-8d1f29ce2abe" containerName="watcher-applier" containerID="cri-o://bda212c490106f4a123f4401f3c58509ceba9c0dbb3f5ff53922deccc24b9d76" gracePeriod=30 Jan 21 21:41:42 crc kubenswrapper[4860]: I0121 21:41:42.896839 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq6zp\" (UniqueName: \"kubernetes.io/projected/ef4488d5-b434-41d2-96ac-5fdf02a677a2-kube-api-access-lq6zp\") pod \"watcher79b6-account-delete-9pxvs\" (UID: \"ef4488d5-b434-41d2-96ac-5fdf02a677a2\") " pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" Jan 21 21:41:43 crc kubenswrapper[4860]: I0121 21:41:43.190499 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" Jan 21 21:41:43 crc kubenswrapper[4860]: I0121 21:41:43.370074 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f09daa3b-b912-4cf6-ab2a-372f8d955b4f","Type":"ContainerStarted","Data":"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"} Jan 21 21:41:43 crc kubenswrapper[4860]: I0121 21:41:43.383285 4860 generic.go:334] "Generic (PLEG): container finished" podID="9a9b98fb-224f-4b32-9df5-29510803f415" containerID="6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db" exitCode=143 Jan 21 21:41:43 crc kubenswrapper[4860]: I0121 21:41:43.383346 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9a9b98fb-224f-4b32-9df5-29510803f415","Type":"ContainerDied","Data":"6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db"} Jan 21 21:41:43 crc kubenswrapper[4860]: I0121 21:41:43.763671 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher79b6-account-delete-9pxvs"] Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.423015 4860 generic.go:334] "Generic (PLEG): container finished" podID="bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" containerID="04fedbd6ec8764d8cbfa7f90a9ef13a8e435d50247aebf169e7cda15b9e372f7" exitCode=0 Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.423404 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595","Type":"ContainerDied","Data":"04fedbd6ec8764d8cbfa7f90a9ef13a8e435d50247aebf169e7cda15b9e372f7"} Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.437035 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"f09daa3b-b912-4cf6-ab2a-372f8d955b4f","Type":"ContainerStarted","Data":"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"} Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.439334 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" event={"ID":"ef4488d5-b434-41d2-96ac-5fdf02a677a2","Type":"ContainerStarted","Data":"3f108f1bdce218e6dc5fe85fe651f929d8a54c80adfb2e7342c9eccca63769ff"} Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.439375 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" event={"ID":"ef4488d5-b434-41d2-96ac-5fdf02a677a2","Type":"ContainerStarted","Data":"68bfa555d68824de8c57dfdf02f3636a8dde5172de5a4127cdc9c13448b91528"} Jan 21 21:41:44 crc kubenswrapper[4860]: E0121 21:41:44.532425 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bda212c490106f4a123f4401f3c58509ceba9c0dbb3f5ff53922deccc24b9d76" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:41:44 crc kubenswrapper[4860]: E0121 21:41:44.544453 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bda212c490106f4a123f4401f3c58509ceba9c0dbb3f5ff53922deccc24b9d76" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:41:44 crc kubenswrapper[4860]: E0121 21:41:44.548554 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bda212c490106f4a123f4401f3c58509ceba9c0dbb3f5ff53922deccc24b9d76" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 
21:41:44 crc kubenswrapper[4860]: E0121 21:41:44.548602 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="4153376c-98ed-4299-a5a4-8d1f29ce2abe" containerName="watcher-applier" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.651837 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.667299 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9a9b98fb-224f-4b32-9df5-29510803f415" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.216:9322/\": read tcp 10.217.0.2:51234->10.217.0.216:9322: read: connection reset by peer" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.667819 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9a9b98fb-224f-4b32-9df5-29510803f415" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.216:9322/\": read tcp 10.217.0.2:51242->10.217.0.216:9322: read: connection reset by peer" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.729326 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-config-data\") pod \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.729445 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-custom-prometheus-ca\") pod 
\"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.729582 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-logs\") pod \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.729610 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-combined-ca-bundle\") pod \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.729661 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-cert-memcached-mtls\") pod \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.729699 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjxh9\" (UniqueName: \"kubernetes.io/projected/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-kube-api-access-wjxh9\") pod \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\" (UID: \"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595\") " Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.731021 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-logs" (OuterVolumeSpecName: "logs") pod "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" (UID: "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.742693 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-kube-api-access-wjxh9" (OuterVolumeSpecName: "kube-api-access-wjxh9") pod "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" (UID: "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595"). InnerVolumeSpecName "kube-api-access-wjxh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.768172 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" (UID: "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.774486 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" (UID: "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.829658 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-config-data" (OuterVolumeSpecName: "config-data") pod "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" (UID: "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.834188 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.834239 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjxh9\" (UniqueName: \"kubernetes.io/projected/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-kube-api-access-wjxh9\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.834256 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.834273 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.834287 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:44 crc kubenswrapper[4860]: I0121 21:41:44.937252 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" (UID: "bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.039491 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.234047 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.346917 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-cert-memcached-mtls\") pod \"9a9b98fb-224f-4b32-9df5-29510803f415\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.347034 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9b98fb-224f-4b32-9df5-29510803f415-logs\") pod \"9a9b98fb-224f-4b32-9df5-29510803f415\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.347147 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-custom-prometheus-ca\") pod \"9a9b98fb-224f-4b32-9df5-29510803f415\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.347195 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ml54\" (UniqueName: \"kubernetes.io/projected/9a9b98fb-224f-4b32-9df5-29510803f415-kube-api-access-4ml54\") pod \"9a9b98fb-224f-4b32-9df5-29510803f415\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 
21:41:45.347233 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-combined-ca-bundle\") pod \"9a9b98fb-224f-4b32-9df5-29510803f415\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.347305 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-config-data\") pod \"9a9b98fb-224f-4b32-9df5-29510803f415\" (UID: \"9a9b98fb-224f-4b32-9df5-29510803f415\") " Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.348484 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9b98fb-224f-4b32-9df5-29510803f415-logs" (OuterVolumeSpecName: "logs") pod "9a9b98fb-224f-4b32-9df5-29510803f415" (UID: "9a9b98fb-224f-4b32-9df5-29510803f415"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.353993 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9b98fb-224f-4b32-9df5-29510803f415-kube-api-access-4ml54" (OuterVolumeSpecName: "kube-api-access-4ml54") pod "9a9b98fb-224f-4b32-9df5-29510803f415" (UID: "9a9b98fb-224f-4b32-9df5-29510803f415"). InnerVolumeSpecName "kube-api-access-4ml54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.379150 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9a9b98fb-224f-4b32-9df5-29510803f415" (UID: "9a9b98fb-224f-4b32-9df5-29510803f415"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.381394 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a9b98fb-224f-4b32-9df5-29510803f415" (UID: "9a9b98fb-224f-4b32-9df5-29510803f415"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.408707 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-config-data" (OuterVolumeSpecName: "config-data") pod "9a9b98fb-224f-4b32-9df5-29510803f415" (UID: "9a9b98fb-224f-4b32-9df5-29510803f415"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.433740 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "9a9b98fb-224f-4b32-9df5-29510803f415" (UID: "9a9b98fb-224f-4b32-9df5-29510803f415"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.450649 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.451215 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9b98fb-224f-4b32-9df5-29510803f415-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.451228 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.451240 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ml54\" (UniqueName: \"kubernetes.io/projected/9a9b98fb-224f-4b32-9df5-29510803f415-kube-api-access-4ml54\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.451253 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.451263 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9b98fb-224f-4b32-9df5-29510803f415-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.480615 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f09daa3b-b912-4cf6-ab2a-372f8d955b4f","Type":"ContainerStarted","Data":"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"} Jan 21 21:41:45 crc 
kubenswrapper[4860]: I0121 21:41:45.484630 4860 generic.go:334] "Generic (PLEG): container finished" podID="ef4488d5-b434-41d2-96ac-5fdf02a677a2" containerID="3f108f1bdce218e6dc5fe85fe651f929d8a54c80adfb2e7342c9eccca63769ff" exitCode=0 Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.484743 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" event={"ID":"ef4488d5-b434-41d2-96ac-5fdf02a677a2","Type":"ContainerDied","Data":"3f108f1bdce218e6dc5fe85fe651f929d8a54c80adfb2e7342c9eccca63769ff"} Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.492174 4860 generic.go:334] "Generic (PLEG): container finished" podID="9a9b98fb-224f-4b32-9df5-29510803f415" containerID="b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762" exitCode=0 Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.492294 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.492290 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9a9b98fb-224f-4b32-9df5-29510803f415","Type":"ContainerDied","Data":"b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762"} Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.492601 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9a9b98fb-224f-4b32-9df5-29510803f415","Type":"ContainerDied","Data":"6616d2d6437d3df06057de0ad5a79c2105566e26816e1742d301886f69bb8a95"} Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.492631 4860 scope.go:117] "RemoveContainer" containerID="b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.498472 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595","Type":"ContainerDied","Data":"e3e41358efcd66aee0f903c70b598df832e29524dbe4231c521ee6e780280c4f"} Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.498857 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.528859 4860 scope.go:117] "RemoveContainer" containerID="6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.547055 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.561608 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.568143 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.568644 4860 scope.go:117] "RemoveContainer" containerID="b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762" Jan 21 21:41:45 crc kubenswrapper[4860]: E0121 21:41:45.571760 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762\": container with ID starting with b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762 not found: ID does not exist" containerID="b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.571812 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762"} err="failed to get container status 
\"b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762\": rpc error: code = NotFound desc = could not find container \"b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762\": container with ID starting with b57834f1e414c00d1bbfe06317da0ebc90a1bd16eed7f1be7becda4a60c5a762 not found: ID does not exist" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.571852 4860 scope.go:117] "RemoveContainer" containerID="6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.574771 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:41:45 crc kubenswrapper[4860]: E0121 21:41:45.575278 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db\": container with ID starting with 6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db not found: ID does not exist" containerID="6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.575323 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db"} err="failed to get container status \"6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db\": rpc error: code = NotFound desc = could not find container \"6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db\": container with ID starting with 6446fe4376bcfb5bff58f1f6019a10919d8bac23f36f667cd7b41faaae8ad1db not found: ID does not exist" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.575350 4860 scope.go:117] "RemoveContainer" containerID="04fedbd6ec8764d8cbfa7f90a9ef13a8e435d50247aebf169e7cda15b9e372f7" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.905150 4860 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.960540 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq6zp\" (UniqueName: \"kubernetes.io/projected/ef4488d5-b434-41d2-96ac-5fdf02a677a2-kube-api-access-lq6zp\") pod \"ef4488d5-b434-41d2-96ac-5fdf02a677a2\" (UID: \"ef4488d5-b434-41d2-96ac-5fdf02a677a2\") " Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.960971 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4488d5-b434-41d2-96ac-5fdf02a677a2-operator-scripts\") pod \"ef4488d5-b434-41d2-96ac-5fdf02a677a2\" (UID: \"ef4488d5-b434-41d2-96ac-5fdf02a677a2\") " Jan 21 21:41:45 crc kubenswrapper[4860]: I0121 21:41:45.962255 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef4488d5-b434-41d2-96ac-5fdf02a677a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef4488d5-b434-41d2-96ac-5fdf02a677a2" (UID: "ef4488d5-b434-41d2-96ac-5fdf02a677a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:41:46 crc kubenswrapper[4860]: I0121 21:41:45.998178 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4488d5-b434-41d2-96ac-5fdf02a677a2-kube-api-access-lq6zp" (OuterVolumeSpecName: "kube-api-access-lq6zp") pod "ef4488d5-b434-41d2-96ac-5fdf02a677a2" (UID: "ef4488d5-b434-41d2-96ac-5fdf02a677a2"). InnerVolumeSpecName "kube-api-access-lq6zp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:46 crc kubenswrapper[4860]: I0121 21:41:46.063874 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4488d5-b434-41d2-96ac-5fdf02a677a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:46 crc kubenswrapper[4860]: I0121 21:41:46.063959 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq6zp\" (UniqueName: \"kubernetes.io/projected/ef4488d5-b434-41d2-96ac-5fdf02a677a2-kube-api-access-lq6zp\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:46 crc kubenswrapper[4860]: I0121 21:41:46.512700 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" event={"ID":"ef4488d5-b434-41d2-96ac-5fdf02a677a2","Type":"ContainerDied","Data":"68bfa555d68824de8c57dfdf02f3636a8dde5172de5a4127cdc9c13448b91528"} Jan 21 21:41:46 crc kubenswrapper[4860]: I0121 21:41:46.512774 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68bfa555d68824de8c57dfdf02f3636a8dde5172de5a4127cdc9c13448b91528" Jan 21 21:41:46 crc kubenswrapper[4860]: I0121 21:41:46.512862 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher79b6-account-delete-9pxvs" Jan 21 21:41:46 crc kubenswrapper[4860]: I0121 21:41:46.540627 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f09daa3b-b912-4cf6-ab2a-372f8d955b4f","Type":"ContainerStarted","Data":"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"} Jan 21 21:41:46 crc kubenswrapper[4860]: I0121 21:41:46.541111 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:46 crc kubenswrapper[4860]: I0121 21:41:46.579992 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.927574546 podStartE2EDuration="5.579963885s" podCreationTimestamp="2026-01-21 21:41:41 +0000 UTC" firstStartedPulling="2026-01-21 21:41:42.263676246 +0000 UTC m=+1994.485854716" lastFinishedPulling="2026-01-21 21:41:45.916065585 +0000 UTC m=+1998.138244055" observedRunningTime="2026-01-21 21:41:46.569980166 +0000 UTC m=+1998.792158636" watchObservedRunningTime="2026-01-21 21:41:46.579963885 +0000 UTC m=+1998.802142355" Jan 21 21:41:46 crc kubenswrapper[4860]: I0121 21:41:46.593020 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a9b98fb-224f-4b32-9df5-29510803f415" path="/var/lib/kubelet/pods/9a9b98fb-224f-4b32-9df5-29510803f415/volumes" Jan 21 21:41:46 crc kubenswrapper[4860]: I0121 21:41:46.593724 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" path="/var/lib/kubelet/pods/bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595/volumes" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.552954 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.636195 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-db-create-2xgvp"] Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.646655 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-2xgvp"] Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.679457 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv"] Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.690824 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher79b6-account-delete-9pxvs"] Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.705678 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher79b6-account-delete-9pxvs"] Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.718026 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-79b6-account-create-update-kvrkv"] Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.725458 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-mh86c"] Jan 21 21:41:47 crc kubenswrapper[4860]: E0121 21:41:47.726086 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9b98fb-224f-4b32-9df5-29510803f415" containerName="watcher-kuttl-api-log" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.726118 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9b98fb-224f-4b32-9df5-29510803f415" containerName="watcher-kuttl-api-log" Jan 21 21:41:47 crc kubenswrapper[4860]: E0121 21:41:47.726147 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" containerName="watcher-decision-engine" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.726156 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" containerName="watcher-decision-engine" Jan 21 21:41:47 crc kubenswrapper[4860]: E0121 
21:41:47.726186 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4488d5-b434-41d2-96ac-5fdf02a677a2" containerName="mariadb-account-delete" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.726195 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4488d5-b434-41d2-96ac-5fdf02a677a2" containerName="mariadb-account-delete" Jan 21 21:41:47 crc kubenswrapper[4860]: E0121 21:41:47.726206 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9b98fb-224f-4b32-9df5-29510803f415" containerName="watcher-api" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.726216 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9b98fb-224f-4b32-9df5-29510803f415" containerName="watcher-api" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.726436 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9b98fb-224f-4b32-9df5-29510803f415" containerName="watcher-kuttl-api-log" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.726460 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa4f0d2-da58-4d3e-80ae-aa27ed3c7595" containerName="watcher-decision-engine" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.726496 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9b98fb-224f-4b32-9df5-29510803f415" containerName="watcher-api" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.726514 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4488d5-b434-41d2-96ac-5fdf02a677a2" containerName="mariadb-account-delete" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.727468 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-mh86c" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.746090 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-mh86c"] Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.799190 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-operator-scripts\") pod \"watcher-db-create-mh86c\" (UID: \"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7\") " pod="watcher-kuttl-default/watcher-db-create-mh86c" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.799392 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjt4\" (UniqueName: \"kubernetes.io/projected/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-kube-api-access-zdjt4\") pod \"watcher-db-create-mh86c\" (UID: \"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7\") " pod="watcher-kuttl-default/watcher-db-create-mh86c" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.834460 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-hdjlt"] Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.845650 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.848593 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.849198 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-hdjlt"] Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.901418 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjt4\" (UniqueName: \"kubernetes.io/projected/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-kube-api-access-zdjt4\") pod \"watcher-db-create-mh86c\" (UID: \"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7\") " pod="watcher-kuttl-default/watcher-db-create-mh86c" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.901491 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqnwl\" (UniqueName: \"kubernetes.io/projected/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-kube-api-access-wqnwl\") pod \"watcher-test-account-create-update-hdjlt\" (UID: \"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff\") " pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.901531 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-operator-scripts\") pod \"watcher-test-account-create-update-hdjlt\" (UID: \"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff\") " pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.901566 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-operator-scripts\") pod \"watcher-db-create-mh86c\" (UID: \"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7\") " pod="watcher-kuttl-default/watcher-db-create-mh86c" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.902519 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-operator-scripts\") pod \"watcher-db-create-mh86c\" (UID: \"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7\") " pod="watcher-kuttl-default/watcher-db-create-mh86c" Jan 21 21:41:47 crc kubenswrapper[4860]: I0121 21:41:47.942960 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjt4\" (UniqueName: \"kubernetes.io/projected/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-kube-api-access-zdjt4\") pod \"watcher-db-create-mh86c\" (UID: \"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7\") " pod="watcher-kuttl-default/watcher-db-create-mh86c" Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.003619 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqnwl\" (UniqueName: \"kubernetes.io/projected/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-kube-api-access-wqnwl\") pod \"watcher-test-account-create-update-hdjlt\" (UID: \"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff\") " pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.003730 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-operator-scripts\") pod \"watcher-test-account-create-update-hdjlt\" (UID: \"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff\") " pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.004649 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-operator-scripts\") pod \"watcher-test-account-create-update-hdjlt\" (UID: \"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff\") " pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.029230 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqnwl\" (UniqueName: \"kubernetes.io/projected/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-kube-api-access-wqnwl\") pod \"watcher-test-account-create-update-hdjlt\" (UID: \"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff\") " pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.062010 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-mh86c" Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.165682 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.582901 4860 generic.go:334] "Generic (PLEG): container finished" podID="4153376c-98ed-4299-a5a4-8d1f29ce2abe" containerID="bda212c490106f4a123f4401f3c58509ceba9c0dbb3f5ff53922deccc24b9d76" exitCode=0 Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.586780 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="proxy-httpd" containerID="cri-o://e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6" gracePeriod=30 Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.586772 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="sg-core" containerID="cri-o://7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec" gracePeriod=30 Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.587038 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="ceilometer-notification-agent" containerID="cri-o://2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0" gracePeriod=30 Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.587140 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="ceilometer-central-agent" containerID="cri-o://a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845" gracePeriod=30 Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.598324 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e539438-d83d-4693-8e38-f3afd267bede" 
path="/var/lib/kubelet/pods/0e539438-d83d-4693-8e38-f3afd267bede/volumes" Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.599401 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6922693a-30ad-444f-a711-f68a403d2690" path="/var/lib/kubelet/pods/6922693a-30ad-444f-a711-f68a403d2690/volumes" Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.600881 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4488d5-b434-41d2-96ac-5fdf02a677a2" path="/var/lib/kubelet/pods/ef4488d5-b434-41d2-96ac-5fdf02a677a2/volumes" Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.601569 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-mh86c"] Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.601707 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"4153376c-98ed-4299-a5a4-8d1f29ce2abe","Type":"ContainerDied","Data":"bda212c490106f4a123f4401f3c58509ceba9c0dbb3f5ff53922deccc24b9d76"} Jan 21 21:41:48 crc kubenswrapper[4860]: W0121 21:41:48.618667 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe6b5ccd_c2f2_4a25_bb24_7e5969f778e7.slice/crio-d4121e5b2ca1e1c99067d844b27f9247cd895ab29eee086ce4259c07a5e320d6 WatchSource:0}: Error finding container d4121e5b2ca1e1c99067d844b27f9247cd895ab29eee086ce4259c07a5e320d6: Status 404 returned error can't find the container with id d4121e5b2ca1e1c99067d844b27f9247cd895ab29eee086ce4259c07a5e320d6 Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.791231 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-hdjlt"] Jan 21 21:41:48 crc kubenswrapper[4860]: W0121 21:41:48.807142 4860 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaa83b7a_ce4f_4bbe_95d8_7a0eda6f07ff.slice/crio-61aa4279a926334400b819bfd431d02fd38027ccc6a0c1bf22470b066c0819f5 WatchSource:0}: Error finding container 61aa4279a926334400b819bfd431d02fd38027ccc6a0c1bf22470b066c0819f5: Status 404 returned error can't find the container with id 61aa4279a926334400b819bfd431d02fd38027ccc6a0c1bf22470b066c0819f5 Jan 21 21:41:48 crc kubenswrapper[4860]: I0121 21:41:48.929329 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.028008 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-combined-ca-bundle\") pod \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.028646 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf99t\" (UniqueName: \"kubernetes.io/projected/4153376c-98ed-4299-a5a4-8d1f29ce2abe-kube-api-access-pf99t\") pod \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.028746 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-config-data\") pod \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.028786 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-cert-memcached-mtls\") pod \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\" (UID: 
\"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.028853 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4153376c-98ed-4299-a5a4-8d1f29ce2abe-logs\") pod \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\" (UID: \"4153376c-98ed-4299-a5a4-8d1f29ce2abe\") " Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.029794 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4153376c-98ed-4299-a5a4-8d1f29ce2abe-logs" (OuterVolumeSpecName: "logs") pod "4153376c-98ed-4299-a5a4-8d1f29ce2abe" (UID: "4153376c-98ed-4299-a5a4-8d1f29ce2abe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.040048 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4153376c-98ed-4299-a5a4-8d1f29ce2abe-kube-api-access-pf99t" (OuterVolumeSpecName: "kube-api-access-pf99t") pod "4153376c-98ed-4299-a5a4-8d1f29ce2abe" (UID: "4153376c-98ed-4299-a5a4-8d1f29ce2abe"). InnerVolumeSpecName "kube-api-access-pf99t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.070995 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4153376c-98ed-4299-a5a4-8d1f29ce2abe" (UID: "4153376c-98ed-4299-a5a4-8d1f29ce2abe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.100348 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-config-data" (OuterVolumeSpecName: "config-data") pod "4153376c-98ed-4299-a5a4-8d1f29ce2abe" (UID: "4153376c-98ed-4299-a5a4-8d1f29ce2abe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.130968 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.131005 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf99t\" (UniqueName: \"kubernetes.io/projected/4153376c-98ed-4299-a5a4-8d1f29ce2abe-kube-api-access-pf99t\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.131019 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.131032 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4153376c-98ed-4299-a5a4-8d1f29ce2abe-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.136581 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "4153376c-98ed-4299-a5a4-8d1f29ce2abe" (UID: "4153376c-98ed-4299-a5a4-8d1f29ce2abe"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.234488 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4153376c-98ed-4299-a5a4-8d1f29ce2abe-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.590973 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.594167 4860 generic.go:334] "Generic (PLEG): container finished" podID="fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7" containerID="7c4e9e05f9bb5d753b1aea9b38dab3a528761d1a0e7dc15ff586919ecea179e0" exitCode=0 Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.594281 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-mh86c" event={"ID":"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7","Type":"ContainerDied","Data":"7c4e9e05f9bb5d753b1aea9b38dab3a528761d1a0e7dc15ff586919ecea179e0"} Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.594342 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-mh86c" event={"ID":"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7","Type":"ContainerStarted","Data":"d4121e5b2ca1e1c99067d844b27f9247cd895ab29eee086ce4259c07a5e320d6"} Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.596424 4860 generic.go:334] "Generic (PLEG): container finished" podID="faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff" containerID="eb0eff4df3b1c006e76db7b7a5a3b7386a336010e04cc9f8f40eef5485c249aa" exitCode=0 Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.596468 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" 
event={"ID":"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff","Type":"ContainerDied","Data":"eb0eff4df3b1c006e76db7b7a5a3b7386a336010e04cc9f8f40eef5485c249aa"} Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.596505 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" event={"ID":"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff","Type":"ContainerStarted","Data":"61aa4279a926334400b819bfd431d02fd38027ccc6a0c1bf22470b066c0819f5"} Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.599144 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.599167 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"4153376c-98ed-4299-a5a4-8d1f29ce2abe","Type":"ContainerDied","Data":"84516655dc28b72f492f7f7462696f11d37641cf40f9c244488e43be9a63dda6"} Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.599244 4860 scope.go:117] "RemoveContainer" containerID="bda212c490106f4a123f4401f3c58509ceba9c0dbb3f5ff53922deccc24b9d76" Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.606570 4860 generic.go:334] "Generic (PLEG): container finished" podID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerID="e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6" exitCode=0 Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.606621 4860 generic.go:334] "Generic (PLEG): container finished" podID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerID="7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec" exitCode=2 Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.606629 4860 generic.go:334] "Generic (PLEG): container finished" podID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerID="2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0" exitCode=0 Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 
21:41:49.606639 4860 generic.go:334] "Generic (PLEG): container finished" podID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerID="a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845" exitCode=0
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.606677 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f09daa3b-b912-4cf6-ab2a-372f8d955b4f","Type":"ContainerDied","Data":"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"}
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.606744 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f09daa3b-b912-4cf6-ab2a-372f8d955b4f","Type":"ContainerDied","Data":"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"}
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.606767 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f09daa3b-b912-4cf6-ab2a-372f8d955b4f","Type":"ContainerDied","Data":"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"}
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.606781 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f09daa3b-b912-4cf6-ab2a-372f8d955b4f","Type":"ContainerDied","Data":"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"}
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.606796 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f09daa3b-b912-4cf6-ab2a-372f8d955b4f","Type":"ContainerDied","Data":"05112007a123904472c56a5e82dab935db7d000bcbcc6692e26403a1d5bb38e7"}
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.606894 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.645249 4860 scope.go:117] "RemoveContainer" containerID="e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.698074 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.701511 4860 scope.go:117] "RemoveContainer" containerID="7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.706323 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.750779 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-log-httpd\") pod \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") "
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.750905 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-scripts\") pod \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") "
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.750958 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8m5k\" (UniqueName: \"kubernetes.io/projected/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-kube-api-access-t8m5k\") pod \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") "
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.750996 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-run-httpd\") pod \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") "
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.751022 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-sg-core-conf-yaml\") pod \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") "
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.751039 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-config-data\") pod \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") "
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.751057 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-combined-ca-bundle\") pod \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") "
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.751092 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-ceilometer-tls-certs\") pod \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\" (UID: \"f09daa3b-b912-4cf6-ab2a-372f8d955b4f\") "
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.752075 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f09daa3b-b912-4cf6-ab2a-372f8d955b4f" (UID: "f09daa3b-b912-4cf6-ab2a-372f8d955b4f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.758790 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f09daa3b-b912-4cf6-ab2a-372f8d955b4f" (UID: "f09daa3b-b912-4cf6-ab2a-372f8d955b4f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.759228 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-kube-api-access-t8m5k" (OuterVolumeSpecName: "kube-api-access-t8m5k") pod "f09daa3b-b912-4cf6-ab2a-372f8d955b4f" (UID: "f09daa3b-b912-4cf6-ab2a-372f8d955b4f"). InnerVolumeSpecName "kube-api-access-t8m5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.763477 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-scripts" (OuterVolumeSpecName: "scripts") pod "f09daa3b-b912-4cf6-ab2a-372f8d955b4f" (UID: "f09daa3b-b912-4cf6-ab2a-372f8d955b4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.776362 4860 scope.go:117] "RemoveContainer" containerID="2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.812163 4860 scope.go:117] "RemoveContainer" containerID="a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.898093 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8m5k\" (UniqueName: \"kubernetes.io/projected/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-kube-api-access-t8m5k\") on node \"crc\" DevicePath \"\""
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.898206 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.898222 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.898261 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.919370 4860 scope.go:117] "RemoveContainer" containerID="e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.920385 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f09daa3b-b912-4cf6-ab2a-372f8d955b4f" (UID: "f09daa3b-b912-4cf6-ab2a-372f8d955b4f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.921143 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f09daa3b-b912-4cf6-ab2a-372f8d955b4f" (UID: "f09daa3b-b912-4cf6-ab2a-372f8d955b4f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:41:49 crc kubenswrapper[4860]: E0121 21:41:49.923690 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6\": container with ID starting with e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6 not found: ID does not exist" containerID="e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.923765 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"} err="failed to get container status \"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6\": rpc error: code = NotFound desc = could not find container \"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6\": container with ID starting with e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6 not found: ID does not exist"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.923805 4860 scope.go:117] "RemoveContainer" containerID="7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"
Jan 21 21:41:49 crc kubenswrapper[4860]: E0121 21:41:49.924380 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec\": container with ID starting with 7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec not found: ID does not exist" containerID="7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.924438 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"} err="failed to get container status \"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec\": rpc error: code = NotFound desc = could not find container \"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec\": container with ID starting with 7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec not found: ID does not exist"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.924479 4860 scope.go:117] "RemoveContainer" containerID="2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"
Jan 21 21:41:49 crc kubenswrapper[4860]: E0121 21:41:49.947708 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0\": container with ID starting with 2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0 not found: ID does not exist" containerID="2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.947788 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"} err="failed to get container status \"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0\": rpc error: code = NotFound desc = could not find container \"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0\": container with ID starting with 2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0 not found: ID does not exist"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.947831 4860 scope.go:117] "RemoveContainer" containerID="a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"
Jan 21 21:41:49 crc kubenswrapper[4860]: E0121 21:41:49.948158 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845\": container with ID starting with a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845 not found: ID does not exist" containerID="a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.948175 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"} err="failed to get container status \"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845\": rpc error: code = NotFound desc = could not find container \"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845\": container with ID starting with a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845 not found: ID does not exist"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.948190 4860 scope.go:117] "RemoveContainer" containerID="e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.981375 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"} err="failed to get container status \"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6\": rpc error: code = NotFound desc = could not find container \"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6\": container with ID starting with e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6 not found: ID does not exist"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.981497 4860 scope.go:117] "RemoveContainer" containerID="7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.982551 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"} err="failed to get container status \"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec\": rpc error: code = NotFound desc = could not find container \"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec\": container with ID starting with 7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec not found: ID does not exist"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.982575 4860 scope.go:117] "RemoveContainer" containerID="2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.986350 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f09daa3b-b912-4cf6-ab2a-372f8d955b4f" (UID: "f09daa3b-b912-4cf6-ab2a-372f8d955b4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.986854 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"} err="failed to get container status \"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0\": rpc error: code = NotFound desc = could not find container \"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0\": container with ID starting with 2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0 not found: ID does not exist"
Jan 21 21:41:49 crc kubenswrapper[4860]: I0121 21:41:49.986958 4860 scope.go:117] "RemoveContainer" containerID="a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:49.991767 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"} err="failed to get container status \"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845\": rpc error: code = NotFound desc = could not find container \"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845\": container with ID starting with a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845 not found: ID does not exist"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:49.991819 4860 scope.go:117] "RemoveContainer" containerID="e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:49.998432 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"} err="failed to get container status \"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6\": rpc error: code = NotFound desc = could not find container \"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6\": container with ID starting with e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6 not found: ID does not exist"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:49.998507 4860 scope.go:117] "RemoveContainer" containerID="7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.001493 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.001522 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.001533 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.002316 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-config-data" (OuterVolumeSpecName: "config-data") pod "f09daa3b-b912-4cf6-ab2a-372f8d955b4f" (UID: "f09daa3b-b912-4cf6-ab2a-372f8d955b4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.006198 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"} err="failed to get container status \"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec\": rpc error: code = NotFound desc = could not find container \"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec\": container with ID starting with 7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec not found: ID does not exist"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.006312 4860 scope.go:117] "RemoveContainer" containerID="2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.014158 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"} err="failed to get container status \"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0\": rpc error: code = NotFound desc = could not find container \"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0\": container with ID starting with 2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0 not found: ID does not exist"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.014232 4860 scope.go:117] "RemoveContainer" containerID="a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.017071 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"} err="failed to get container status \"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845\": rpc error: code = NotFound desc = could not find container \"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845\": container with ID starting with a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845 not found: ID does not exist"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.017111 4860 scope.go:117] "RemoveContainer" containerID="e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.021071 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6"} err="failed to get container status \"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6\": rpc error: code = NotFound desc = could not find container \"e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6\": container with ID starting with e44200446ea44772daecdd6382bc2d936a678d0c585653754d3b7e8bb67b55f6 not found: ID does not exist"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.021101 4860 scope.go:117] "RemoveContainer" containerID="7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.025076 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec"} err="failed to get container status \"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec\": rpc error: code = NotFound desc = could not find container \"7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec\": container with ID starting with 7fb5d19aa36d6174248bbd3efab8cdbbea76d6192fbf0810395b0f570cb471ec not found: ID does not exist"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.025110 4860 scope.go:117] "RemoveContainer" containerID="2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.034834 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0"} err="failed to get container status \"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0\": rpc error: code = NotFound desc = could not find container \"2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0\": container with ID starting with 2cf1d4e7c4ed873fa7e8b5e447f3dd5cb2d9f880309e6a4d9ac1c808a9e305f0 not found: ID does not exist"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.034883 4860 scope.go:117] "RemoveContainer" containerID="a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.035478 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845"} err="failed to get container status \"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845\": rpc error: code = NotFound desc = could not find container \"a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845\": container with ID starting with a6bce3745a619132a8088320edbd3402954680ff933219e44ea4739643249845 not found: ID does not exist"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.104068 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09daa3b-b912-4cf6-ab2a-372f8d955b4f-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.251977 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.261317 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.287478 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:41:50 crc kubenswrapper[4860]: E0121 21:41:50.288009 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4153376c-98ed-4299-a5a4-8d1f29ce2abe" containerName="watcher-applier"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.288029 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4153376c-98ed-4299-a5a4-8d1f29ce2abe" containerName="watcher-applier"
Jan 21 21:41:50 crc kubenswrapper[4860]: E0121 21:41:50.288045 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="sg-core"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.288052 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="sg-core"
Jan 21 21:41:50 crc kubenswrapper[4860]: E0121 21:41:50.288064 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="ceilometer-central-agent"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.288073 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="ceilometer-central-agent"
Jan 21 21:41:50 crc kubenswrapper[4860]: E0121 21:41:50.288093 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="ceilometer-notification-agent"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.288099 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="ceilometer-notification-agent"
Jan 21 21:41:50 crc kubenswrapper[4860]: E0121 21:41:50.288109 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="proxy-httpd"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.288115 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="proxy-httpd"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.288284 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4153376c-98ed-4299-a5a4-8d1f29ce2abe" containerName="watcher-applier"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.288297 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="ceilometer-central-agent"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.288313 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="proxy-httpd"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.288322 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="ceilometer-notification-agent"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.288332 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" containerName="sg-core"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.289912 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.293343 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.294353 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.296528 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.308636 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.410163 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.410309 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.410590 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-config-data\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.410660 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-log-httpd\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.410812 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcrj7\" (UniqueName: \"kubernetes.io/projected/65f95265-628f-4909-b078-c4628101396b-kube-api-access-qcrj7\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.410946 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-scripts\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.411044 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.411081 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-run-httpd\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.513332 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.513696 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-config-data\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.513725 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-log-httpd\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.513765 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcrj7\" (UniqueName: \"kubernetes.io/projected/65f95265-628f-4909-b078-c4628101396b-kube-api-access-qcrj7\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.513812 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-scripts\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.513845 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-run-httpd\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.513863 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.513892 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.514606 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-log-httpd\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.514673 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-run-httpd\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.521148 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-config-data\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.525835 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.525835 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-scripts\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.526339 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.527797 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.536510 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcrj7\" (UniqueName: \"kubernetes.io/projected/65f95265-628f-4909-b078-c4628101396b-kube-api-access-qcrj7\") pod \"ceilometer-0\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.595466 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4153376c-98ed-4299-a5a4-8d1f29ce2abe" path="/var/lib/kubelet/pods/4153376c-98ed-4299-a5a4-8d1f29ce2abe/volumes"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.597846 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f09daa3b-b912-4cf6-ab2a-372f8d955b4f" path="/var/lib/kubelet/pods/f09daa3b-b912-4cf6-ab2a-372f8d955b4f/volumes"
Jan 21 21:41:50 crc kubenswrapper[4860]: I0121 21:41:50.614904 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.200221 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-mh86c"
Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.207067 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt"
Jan 21 21:41:51 crc kubenswrapper[4860]: W0121 21:41:51.276264 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65f95265_628f_4909_b078_c4628101396b.slice/crio-06c9f469e93000dd54020ed23dd9f30e8a6f9621d6375a982fb365472755ac78 WatchSource:0}: Error finding container 06c9f469e93000dd54020ed23dd9f30e8a6f9621d6375a982fb365472755ac78: Status 404 returned error can't find the container with id 06c9f469e93000dd54020ed23dd9f30e8a6f9621d6375a982fb365472755ac78
Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.279874 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.327595 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-operator-scripts\") pod \"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff\" (UID: \"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff\") "
Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.327739 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-wqnwl\" (UniqueName: \"kubernetes.io/projected/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-kube-api-access-wqnwl\") pod \"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff\" (UID: \"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff\") " Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.327810 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdjt4\" (UniqueName: \"kubernetes.io/projected/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-kube-api-access-zdjt4\") pod \"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7\" (UID: \"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7\") " Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.327868 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-operator-scripts\") pod \"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7\" (UID: \"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7\") " Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.328679 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7" (UID: "fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.329097 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.329394 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff" (UID: "faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.333190 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-kube-api-access-zdjt4" (OuterVolumeSpecName: "kube-api-access-zdjt4") pod "fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7" (UID: "fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7"). InnerVolumeSpecName "kube-api-access-zdjt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.333566 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-kube-api-access-wqnwl" (OuterVolumeSpecName: "kube-api-access-wqnwl") pod "faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff" (UID: "faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff"). InnerVolumeSpecName "kube-api-access-wqnwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.431410 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.431463 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqnwl\" (UniqueName: \"kubernetes.io/projected/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff-kube-api-access-wqnwl\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.431482 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdjt4\" (UniqueName: \"kubernetes.io/projected/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7-kube-api-access-zdjt4\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.635319 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-mh86c" event={"ID":"fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7","Type":"ContainerDied","Data":"d4121e5b2ca1e1c99067d844b27f9247cd895ab29eee086ce4259c07a5e320d6"} Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.635369 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4121e5b2ca1e1c99067d844b27f9247cd895ab29eee086ce4259c07a5e320d6" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.635442 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-mh86c" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.637614 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" event={"ID":"faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff","Type":"ContainerDied","Data":"61aa4279a926334400b819bfd431d02fd38027ccc6a0c1bf22470b066c0819f5"} Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.637667 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61aa4279a926334400b819bfd431d02fd38027ccc6a0c1bf22470b066c0819f5" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.637758 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-hdjlt" Jan 21 21:41:51 crc kubenswrapper[4860]: I0121 21:41:51.643271 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"65f95265-628f-4909-b078-c4628101396b","Type":"ContainerStarted","Data":"06c9f469e93000dd54020ed23dd9f30e8a6f9621d6375a982fb365472755ac78"} Jan 21 21:41:52 crc kubenswrapper[4860]: I0121 21:41:52.655295 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"65f95265-628f-4909-b078-c4628101396b","Type":"ContainerStarted","Data":"83a81ebb34d1ed3c18aa9b2adfa4d8e5fdacfdcf09b95088da668959ef54c300"} Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.087670 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-r84fd"] Jan 21 21:41:53 crc kubenswrapper[4860]: E0121 21:41:53.088299 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff" containerName="mariadb-account-create-update" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.088324 4860 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff" containerName="mariadb-account-create-update" Jan 21 21:41:53 crc kubenswrapper[4860]: E0121 21:41:53.088356 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7" containerName="mariadb-database-create" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.088365 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7" containerName="mariadb-database-create" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.088582 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff" containerName="mariadb-account-create-update" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.088606 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7" containerName="mariadb-database-create" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.089460 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.096352 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-9pxlb" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.096773 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.102149 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-r84fd"] Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.114110 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-db-sync-config-data\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.114208 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-config-data\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.114324 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.114356 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmmg8\" (UniqueName: \"kubernetes.io/projected/f06f822b-9fe0-4619-9346-0404f0ab0210-kube-api-access-jmmg8\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.215502 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-config-data\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.216129 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.216166 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmmg8\" (UniqueName: \"kubernetes.io/projected/f06f822b-9fe0-4619-9346-0404f0ab0210-kube-api-access-jmmg8\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.216269 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-db-sync-config-data\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 
21:41:53.230887 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-db-sync-config-data\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.231020 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-config-data\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.234333 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.245812 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmmg8\" (UniqueName: \"kubernetes.io/projected/f06f822b-9fe0-4619-9346-0404f0ab0210-kube-api-access-jmmg8\") pod \"watcher-kuttl-db-sync-r84fd\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.411283 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:53 crc kubenswrapper[4860]: I0121 21:41:53.753894 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"65f95265-628f-4909-b078-c4628101396b","Type":"ContainerStarted","Data":"0b5998872bf585fdad38e605b0035e0a982ca648f45ece9e1fdb27930b51e98e"} Jan 21 21:41:54 crc kubenswrapper[4860]: I0121 21:41:54.126453 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-r84fd"] Jan 21 21:41:54 crc kubenswrapper[4860]: I0121 21:41:54.766952 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" event={"ID":"f06f822b-9fe0-4619-9346-0404f0ab0210","Type":"ContainerStarted","Data":"cc07409fd7b182175d87b820415cc8842fe9182701451a9c8b9cbd833c907cd9"} Jan 21 21:41:54 crc kubenswrapper[4860]: I0121 21:41:54.767480 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" event={"ID":"f06f822b-9fe0-4619-9346-0404f0ab0210","Type":"ContainerStarted","Data":"e3894434bbb942e9a33a240b27382a14a01bba5e5cab34d66ec8cacd6c6c30e5"} Jan 21 21:41:54 crc kubenswrapper[4860]: I0121 21:41:54.770745 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"65f95265-628f-4909-b078-c4628101396b","Type":"ContainerStarted","Data":"a91a0d77c4b6352896380a5ed9d3650dfeaa975ccc482574487cdb82d532c287"} Jan 21 21:41:54 crc kubenswrapper[4860]: I0121 21:41:54.794253 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" podStartSLOduration=1.7942031250000001 podStartE2EDuration="1.794203125s" podCreationTimestamp="2026-01-21 21:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 
21:41:54.784867206 +0000 UTC m=+2007.007045666" watchObservedRunningTime="2026-01-21 21:41:54.794203125 +0000 UTC m=+2007.016381595" Jan 21 21:41:55 crc kubenswrapper[4860]: I0121 21:41:55.785675 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"65f95265-628f-4909-b078-c4628101396b","Type":"ContainerStarted","Data":"afabf6395d19c8740502efad9b18f373180ff206b8bf3abf4858c01bf28606d1"} Jan 21 21:41:55 crc kubenswrapper[4860]: I0121 21:41:55.816855 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.7315876239999999 podStartE2EDuration="5.81682294s" podCreationTimestamp="2026-01-21 21:41:50 +0000 UTC" firstStartedPulling="2026-01-21 21:41:51.279914572 +0000 UTC m=+2003.502093042" lastFinishedPulling="2026-01-21 21:41:55.365149888 +0000 UTC m=+2007.587328358" observedRunningTime="2026-01-21 21:41:55.808785761 +0000 UTC m=+2008.030964251" watchObservedRunningTime="2026-01-21 21:41:55.81682294 +0000 UTC m=+2008.039001410" Jan 21 21:41:56 crc kubenswrapper[4860]: I0121 21:41:56.794836 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:41:57 crc kubenswrapper[4860]: I0121 21:41:57.250481 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/controller/0.log" Jan 21 21:41:57 crc kubenswrapper[4860]: I0121 21:41:57.257580 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/kube-rbac-proxy/0.log" Jan 21 21:41:57 crc kubenswrapper[4860]: I0121 21:41:57.276151 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/controller/0.log" Jan 21 21:41:57 crc kubenswrapper[4860]: I0121 21:41:57.824393 4860 generic.go:334] 
"Generic (PLEG): container finished" podID="f06f822b-9fe0-4619-9346-0404f0ab0210" containerID="cc07409fd7b182175d87b820415cc8842fe9182701451a9c8b9cbd833c907cd9" exitCode=0 Jan 21 21:41:57 crc kubenswrapper[4860]: I0121 21:41:57.825434 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" event={"ID":"f06f822b-9fe0-4619-9346-0404f0ab0210","Type":"ContainerDied","Data":"cc07409fd7b182175d87b820415cc8842fe9182701451a9c8b9cbd833c907cd9"} Jan 21 21:41:58 crc kubenswrapper[4860]: I0121 21:41:58.582665 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr/0.log" Jan 21 21:41:58 crc kubenswrapper[4860]: I0121 21:41:58.598602 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/reloader/0.log" Jan 21 21:41:58 crc kubenswrapper[4860]: I0121 21:41:58.610504 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr-metrics/0.log" Jan 21 21:41:58 crc kubenswrapper[4860]: I0121 21:41:58.621615 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy/0.log" Jan 21 21:41:58 crc kubenswrapper[4860]: I0121 21:41:58.634378 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy-frr/0.log" Jan 21 21:41:58 crc kubenswrapper[4860]: I0121 21:41:58.645609 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-frr-files/0.log" Jan 21 21:41:58 crc kubenswrapper[4860]: I0121 21:41:58.657789 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-reloader/0.log" Jan 21 21:41:58 
crc kubenswrapper[4860]: I0121 21:41:58.670465 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-metrics/0.log" Jan 21 21:41:58 crc kubenswrapper[4860]: I0121 21:41:58.684638 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6vpls_e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e/frr-k8s-webhook-server/0.log" Jan 21 21:41:58 crc kubenswrapper[4860]: I0121 21:41:58.718172 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5844d47cc5-cxs88_c8584c36-7092-4bd3-b92e-5a3e8c16ec63/manager/0.log" Jan 21 21:41:58 crc kubenswrapper[4860]: I0121 21:41:58.729408 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ccfb7bd9d-w49p7_f6d67ae0-be03-465f-bb51-ace581cc0bb8/webhook-server/0.log" Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:58.991167 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/speaker/0.log" Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.006199 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/kube-rbac-proxy/0.log" Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.236716 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.261644 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-db-sync-config-data\") pod \"f06f822b-9fe0-4619-9346-0404f0ab0210\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.261741 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmmg8\" (UniqueName: \"kubernetes.io/projected/f06f822b-9fe0-4619-9346-0404f0ab0210-kube-api-access-jmmg8\") pod \"f06f822b-9fe0-4619-9346-0404f0ab0210\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.261807 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-combined-ca-bundle\") pod \"f06f822b-9fe0-4619-9346-0404f0ab0210\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.261858 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-config-data\") pod \"f06f822b-9fe0-4619-9346-0404f0ab0210\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.274311 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f06f822b-9fe0-4619-9346-0404f0ab0210" (UID: "f06f822b-9fe0-4619-9346-0404f0ab0210"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.286570 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06f822b-9fe0-4619-9346-0404f0ab0210-kube-api-access-jmmg8" (OuterVolumeSpecName: "kube-api-access-jmmg8") pod "f06f822b-9fe0-4619-9346-0404f0ab0210" (UID: "f06f822b-9fe0-4619-9346-0404f0ab0210"). InnerVolumeSpecName "kube-api-access-jmmg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.319257 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f06f822b-9fe0-4619-9346-0404f0ab0210" (UID: "f06f822b-9fe0-4619-9346-0404f0ab0210"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.362593 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-config-data" (OuterVolumeSpecName: "config-data") pod "f06f822b-9fe0-4619-9346-0404f0ab0210" (UID: "f06f822b-9fe0-4619-9346-0404f0ab0210"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.363227 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-config-data\") pod \"f06f822b-9fe0-4619-9346-0404f0ab0210\" (UID: \"f06f822b-9fe0-4619-9346-0404f0ab0210\") " Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.363698 4860 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.363728 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmmg8\" (UniqueName: \"kubernetes.io/projected/f06f822b-9fe0-4619-9346-0404f0ab0210-kube-api-access-jmmg8\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.363749 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:41:59 crc kubenswrapper[4860]: W0121 21:41:59.363870 4860 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f06f822b-9fe0-4619-9346-0404f0ab0210/volumes/kubernetes.io~secret/config-data Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.363894 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-config-data" (OuterVolumeSpecName: "config-data") pod "f06f822b-9fe0-4619-9346-0404f0ab0210" (UID: "f06f822b-9fe0-4619-9346-0404f0ab0210"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.466155 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06f822b-9fe0-4619-9346-0404f0ab0210-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.847878 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd" event={"ID":"f06f822b-9fe0-4619-9346-0404f0ab0210","Type":"ContainerDied","Data":"e3894434bbb942e9a33a240b27382a14a01bba5e5cab34d66ec8cacd6c6c30e5"}
Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.847960 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3894434bbb942e9a33a240b27382a14a01bba5e5cab34d66ec8cacd6c6c30e5"
Jan 21 21:41:59 crc kubenswrapper[4860]: I0121 21:41:59.848057 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r84fd"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.308810 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:42:00 crc kubenswrapper[4860]: E0121 21:42:00.311432 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06f822b-9fe0-4619-9346-0404f0ab0210" containerName="watcher-kuttl-db-sync"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.311473 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06f822b-9fe0-4619-9346-0404f0ab0210" containerName="watcher-kuttl-db-sync"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.311722 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06f822b-9fe0-4619-9346-0404f0ab0210" containerName="watcher-kuttl-db-sync"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.313300 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.321692 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.323802 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-9pxlb"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.325325 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.331611 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.334255 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.373033 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.388899 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.389009 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.389107 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnlrp\" (UniqueName: \"kubernetes.io/projected/3f04d464-3d71-4581-bf35-3e19f06eaeb2-kube-api-access-fnlrp\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.389156 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f04d464-3d71-4581-bf35-3e19f06eaeb2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.389257 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.389307 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.389343 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.389381 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.389432 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.389461 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d27bt\" (UniqueName: \"kubernetes.io/projected/c07cb085-cf53-46c9-bc02-04be321dd57e-kube-api-access-d27bt\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.389495 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c07cb085-cf53-46c9-bc02-04be321dd57e-logs\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.389559 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.423090 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.424550 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.431130 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.449844 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.488652 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.490805 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493398 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493489 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493516 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d27bt\" (UniqueName: \"kubernetes.io/projected/c07cb085-cf53-46c9-bc02-04be321dd57e-kube-api-access-d27bt\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493545 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c07cb085-cf53-46c9-bc02-04be321dd57e-logs\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493619 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493688 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493718 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493743 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493782 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493875 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnlrp\" (UniqueName: \"kubernetes.io/projected/3f04d464-3d71-4581-bf35-3e19f06eaeb2-kube-api-access-fnlrp\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493896 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493926 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbpf\" (UniqueName: \"kubernetes.io/projected/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-kube-api-access-qqbpf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.493991 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f04d464-3d71-4581-bf35-3e19f06eaeb2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.494065 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.494122 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.494175 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.494227 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.494271 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.496051 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c07cb085-cf53-46c9-bc02-04be321dd57e-logs\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.496817 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.497907 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f04d464-3d71-4581-bf35-3e19f06eaeb2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.500551 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.508190 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.519729 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.520093 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.520251 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.528823 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.531346 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.532030 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.532975 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.539078 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d27bt\" (UniqueName: \"kubernetes.io/projected/c07cb085-cf53-46c9-bc02-04be321dd57e-kube-api-access-d27bt\") pod \"watcher-kuttl-api-1\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.539756 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnlrp\" (UniqueName: \"kubernetes.io/projected/3f04d464-3d71-4581-bf35-3e19f06eaeb2-kube-api-access-fnlrp\") pod \"watcher-kuttl-api-0\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.595900 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.595964 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqbpf\" (UniqueName: \"kubernetes.io/projected/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-kube-api-access-qqbpf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.596039 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.596144 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.600829 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.600908 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.601385 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.601830 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.612161 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.612316 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.613089 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.615415 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqbpf\" (UniqueName: \"kubernetes.io/projected/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-kube-api-access-qqbpf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.636995 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.652157 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.702548 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpb6p\" (UniqueName: \"kubernetes.io/projected/1913aa9d-f183-4d88-b640-6b2be407a629-kube-api-access-kpb6p\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.703305 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1913aa9d-f183-4d88-b640-6b2be407a629-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.703391 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.703510 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.703774 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.748525 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.806428 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpb6p\" (UniqueName: \"kubernetes.io/projected/1913aa9d-f183-4d88-b640-6b2be407a629-kube-api-access-kpb6p\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.806878 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1913aa9d-f183-4d88-b640-6b2be407a629-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.806910 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.807103 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.807215 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.807285 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1913aa9d-f183-4d88-b640-6b2be407a629-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.817894 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.818746 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.820725 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.832775 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpb6p\" (UniqueName: \"kubernetes.io/projected/1913aa9d-f183-4d88-b640-6b2be407a629-kube-api-access-kpb6p\") pod \"watcher-kuttl-applier-0\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:00 crc kubenswrapper[4860]: I0121 21:42:00.967541 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 21 21:42:01 crc kubenswrapper[4860]: I0121 21:42:01.254083 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 21 21:42:01 crc kubenswrapper[4860]: I0121 21:42:01.422581 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 21 21:42:01 crc kubenswrapper[4860]: W0121 21:42:01.580158 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8731e357_3b33_4bc0_8f0b_3f69dc31b93f.slice/crio-b787c602b6faf7797c406649f7b5b0722ebea56f64bdddc8c74c8f6e8f2e4a87 WatchSource:0}: Error finding container b787c602b6faf7797c406649f7b5b0722ebea56f64bdddc8c74c8f6e8f2e4a87: Status 404 returned error can't find the container with id b787c602b6faf7797c406649f7b5b0722ebea56f64bdddc8c74c8f6e8f2e4a87
Jan 21 21:42:01 crc kubenswrapper[4860]: I0121 21:42:01.584672 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 21 21:42:01 crc kubenswrapper[4860]: I0121 21:42:01.639288 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 21 21:42:01 crc kubenswrapper[4860]: I0121 21:42:01.894595 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"c07cb085-cf53-46c9-bc02-04be321dd57e","Type":"ContainerStarted","Data":"9bea35e633708a43b7ae34b6b12ed92eefb43980b3f5ba60bd4d9751ee45c048"}
Jan 21 21:42:01 crc kubenswrapper[4860]: I0121 21:42:01.894676 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"c07cb085-cf53-46c9-bc02-04be321dd57e","Type":"ContainerStarted","Data":"f85a4f57a62b8b00d99173513c94889c21e73feb63c95fa755024792f1e3113e"}
Jan 21 21:42:01 crc kubenswrapper[4860]: I0121 21:42:01.902086 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1913aa9d-f183-4d88-b640-6b2be407a629","Type":"ContainerStarted","Data":"b6424811b48903c42469561597aae18066a90be011d0992cb5b6f150064f4161"}
Jan 21 21:42:01 crc kubenswrapper[4860]: I0121 21:42:01.904546 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8731e357-3b33-4bc0-8f0b-3f69dc31b93f","Type":"ContainerStarted","Data":"b787c602b6faf7797c406649f7b5b0722ebea56f64bdddc8c74c8f6e8f2e4a87"}
Jan 21 21:42:01 crc kubenswrapper[4860]: I0121 21:42:01.910422 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"3f04d464-3d71-4581-bf35-3e19f06eaeb2","Type":"ContainerStarted","Data":"de32ca698c806ee639bb8c516aa05f1087d9269904df3178759a12425124c204"}
Jan 21 21:42:01 crc kubenswrapper[4860]: I0121 21:42:01.910490 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"3f04d464-3d71-4581-bf35-3e19f06eaeb2","Type":"ContainerStarted","Data":"bb90f33e48d22cfc9d185422ec888d5687066aa1aeb113293ebe1f6d501eb642"}
Jan 21 21:42:02 crc kubenswrapper[4860]: I0121 21:42:02.104118 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:42:02 crc kubenswrapper[4860]: I0121 21:42:02.104214 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:42:02 crc kubenswrapper[4860]: I0121 21:42:02.920712 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8731e357-3b33-4bc0-8f0b-3f69dc31b93f","Type":"ContainerStarted","Data":"5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de"}
Jan 21 21:42:02 crc kubenswrapper[4860]: I0121 21:42:02.924562 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"3f04d464-3d71-4581-bf35-3e19f06eaeb2","Type":"ContainerStarted","Data":"53b34565402eacfd3146739d1f8bc81867be6a0dca80b39f007324b55dd7aa0d"}
Jan 21 21:42:02 crc kubenswrapper[4860]: I0121 21:42:02.925460 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 21 21:42:02 crc kubenswrapper[4860]: I0121 21:42:02.928007 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"c07cb085-cf53-46c9-bc02-04be321dd57e","Type":"ContainerStarted","Data":"ae5bc36d0e71c63a0966731d8fbab3ef5e458b5496302fdc73280ad1779bc116"}
Jan 21 21:42:02 crc kubenswrapper[4860]: I0121 21:42:02.928988 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 21 21:42:02 crc kubenswrapper[4860]: I0121 21:42:02.930993 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1913aa9d-f183-4d88-b640-6b2be407a629","Type":"ContainerStarted","Data":"fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b"}
Jan 21 21:42:02 crc kubenswrapper[4860]: I0121 21:42:02.952612 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.952586875 podStartE2EDuration="2.952586875s" podCreationTimestamp="2026-01-21 21:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:42:02.943945368 +0000 UTC m=+2015.166123848" watchObservedRunningTime="2026-01-21 21:42:02.952586875 +0000 UTC m=+2015.174765345"
Jan 21 21:42:02 crc kubenswrapper[4860]: I0121 21:42:02.982073 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.9820469579999997 podStartE2EDuration="2.982046958s" podCreationTimestamp="2026-01-21 21:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:42:02.972513082 +0000 UTC m=+2015.194691562" watchObservedRunningTime="2026-01-21 21:42:02.982046958 +0000 UTC m=+2015.204225428"
Jan 21 21:42:03 crc kubenswrapper[4860]: I0121 21:42:03.001991 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=3.001928892 podStartE2EDuration="3.001928892s" podCreationTimestamp="2026-01-21 21:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:42:02.996810304 +0000 UTC m=+2015.218988774" watchObservedRunningTime="2026-01-21 21:42:03.001928892 +0000 UTC m=+2015.224107362"
Jan 21 21:42:03 crc kubenswrapper[4860]: I0121 21:42:03.033783 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=3.033751078 podStartE2EDuration="3.033751078s"
podCreationTimestamp="2026-01-21 21:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:42:03.018902158 +0000 UTC m=+2015.241080648" watchObservedRunningTime="2026-01-21 21:42:03.033751078 +0000 UTC m=+2015.255929548" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.150680 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/extract/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.200322 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/util/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.227408 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/pull/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.274546 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-sslzp_404e97a3-3fcd-4ec0-a67d-53ed93d62685/manager/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.402478 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-c95ps_2dd3e1b9-abea-4287-87e0-cb3f60423d54/manager/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.462852 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-vrvmq_1a209a81-fb7b-4621-84db-567f96093a6b/manager/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.481596 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/extract/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.516329 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/util/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.533884 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/pull/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.553735 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-p7jg2_33a0c624-f40b-4d45-9b00-39c36c15d6bb/manager/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.576129 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b29tb_f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85/manager/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.621552 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-pvq7t_084bba8e-36e4-4e04-8109-4b0f6f97d37f/manager/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.831136 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-8hx7p_3d5ae9ad-1309-4221-b99a-86b9e5aa075b/manager/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.851181 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-ldzzc_d107aacb-3e12-43fd-a68c-2a6b2c10295c/manager/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.949753 4860 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.950170 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.967660 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4vpgf_96503e13-4e73-4048-be57-01a726c114da/manager/0.log" Jan 21 21:42:04 crc kubenswrapper[4860]: I0121 21:42:04.984893 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-w6jg6_519cbf74-c4d7-425b-837d-afbb85f3ecc4/manager/0.log" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.026730 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-w857v_4f7ce297-eef0-4067-bd7b-1bb64ced0239/manager/0.log" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.039747 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-8mv6c_626c3db6-f60f-472b-b0e5-0834b5bded25/manager/0.log" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.059627 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-nn25n_69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5/manager/0.log" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.104872 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q8wm8_adcb4b85-f016-45ed-8029-7191ade5683a/manager/0.log" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.228097 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854787gn_95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96/manager/0.log" Jan 21 
21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.637399 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.653104 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.755032 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c98596b-6jfrl_8dad99b9-0de7-450d-8c58-96590671dd98/manager/0.log" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.783680 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bhnr9_f4f99b18-596f-4e28-8941-0b83f1cf57e5/registry-server/0.log" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.802168 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-nbvmh_a5eceab3-1171-484d-91da-990d323440d4/manager/0.log" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.820168 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-m892h_9731b174-d203-4170-b49f-0de94000f154/manager/0.log" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.851717 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mpknx_93010989-aa15-487c-b470-919932329af1/operator/0.log" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.868511 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-pv9x9_b4019683-a628-42e6-91ba-1cb0505326e3/manager/0.log" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.959878 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 
21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.960726 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 21:42:05 crc kubenswrapper[4860]: I0121 21:42:05.969086 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:42:06 crc kubenswrapper[4860]: I0121 21:42:06.103817 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-bk9sb_61a273d5-b25c-4729-8736-9965ac435468/manager/0.log" Jan 21 21:42:06 crc kubenswrapper[4860]: I0121 21:42:06.160354 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-tldvn_3f367ab5-2df3-466b-8ec4-7c4f23dcc578/manager/0.log" Jan 21 21:42:06 crc kubenswrapper[4860]: I0121 21:42:06.521077 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-844f9d4c74-gwp5p_84bd609c-f081-46a8-80ba-9c251389699e/manager/0.log" Jan 21 21:42:06 crc kubenswrapper[4860]: I0121 21:42:06.541384 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-8w757_bdbebf1c-8bd6-4223-939a-f088d773cdc5/registry-server/0.log" Jan 21 21:42:06 crc kubenswrapper[4860]: I0121 21:42:06.636846 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:42:07 crc kubenswrapper[4860]: I0121 21:42:07.048892 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:42:09 crc kubenswrapper[4860]: I0121 21:42:09.738099 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5k4pz_must-gather-t8b54_f2c12be4-8e69-45c0-88a0-e2148aae2e90/gather/0.log" Jan 21 21:42:10 crc kubenswrapper[4860]: I0121 21:42:10.638386 4860 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:42:10 crc kubenswrapper[4860]: I0121 21:42:10.643927 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:42:10 crc kubenswrapper[4860]: I0121 21:42:10.652728 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:42:10 crc kubenswrapper[4860]: I0121 21:42:10.662491 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:42:10 crc kubenswrapper[4860]: I0121 21:42:10.749363 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:42:10 crc kubenswrapper[4860]: I0121 21:42:10.782523 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:42:10 crc kubenswrapper[4860]: I0121 21:42:10.967881 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:42:11 crc kubenswrapper[4860]: I0121 21:42:11.007109 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:42:11 crc kubenswrapper[4860]: I0121 21:42:11.027261 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:42:11 crc kubenswrapper[4860]: I0121 21:42:11.042133 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:42:11 crc kubenswrapper[4860]: I0121 21:42:11.044702 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 
21:42:11 crc kubenswrapper[4860]: I0121 21:42:11.070786 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:42:11 crc kubenswrapper[4860]: I0121 21:42:11.080030 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:42:13 crc kubenswrapper[4860]: I0121 21:42:13.841796 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:42:13 crc kubenswrapper[4860]: I0121 21:42:13.842761 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="ceilometer-central-agent" containerID="cri-o://83a81ebb34d1ed3c18aa9b2adfa4d8e5fdacfdcf09b95088da668959ef54c300" gracePeriod=30 Jan 21 21:42:13 crc kubenswrapper[4860]: I0121 21:42:13.843728 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="proxy-httpd" containerID="cri-o://afabf6395d19c8740502efad9b18f373180ff206b8bf3abf4858c01bf28606d1" gracePeriod=30 Jan 21 21:42:13 crc kubenswrapper[4860]: I0121 21:42:13.843840 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="sg-core" containerID="cri-o://a91a0d77c4b6352896380a5ed9d3650dfeaa975ccc482574487cdb82d532c287" gracePeriod=30 Jan 21 21:42:13 crc kubenswrapper[4860]: I0121 21:42:13.843892 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="ceilometer-notification-agent" containerID="cri-o://0b5998872bf585fdad38e605b0035e0a982ca648f45ece9e1fdb27930b51e98e" gracePeriod=30 Jan 21 
21:42:13 crc kubenswrapper[4860]: I0121 21:42:13.852316 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.223:3000/\": EOF" Jan 21 21:42:14 crc kubenswrapper[4860]: I0121 21:42:14.060433 4860 generic.go:334] "Generic (PLEG): container finished" podID="65f95265-628f-4909-b078-c4628101396b" containerID="afabf6395d19c8740502efad9b18f373180ff206b8bf3abf4858c01bf28606d1" exitCode=0 Jan 21 21:42:14 crc kubenswrapper[4860]: I0121 21:42:14.060871 4860 generic.go:334] "Generic (PLEG): container finished" podID="65f95265-628f-4909-b078-c4628101396b" containerID="a91a0d77c4b6352896380a5ed9d3650dfeaa975ccc482574487cdb82d532c287" exitCode=2 Jan 21 21:42:14 crc kubenswrapper[4860]: I0121 21:42:14.060674 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"65f95265-628f-4909-b078-c4628101396b","Type":"ContainerDied","Data":"afabf6395d19c8740502efad9b18f373180ff206b8bf3abf4858c01bf28606d1"} Jan 21 21:42:14 crc kubenswrapper[4860]: I0121 21:42:14.060917 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"65f95265-628f-4909-b078-c4628101396b","Type":"ContainerDied","Data":"a91a0d77c4b6352896380a5ed9d3650dfeaa975ccc482574487cdb82d532c287"} Jan 21 21:42:14 crc kubenswrapper[4860]: I0121 21:42:14.797045 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4x452_70aea1b0-13b2-43ee-a77d-10c3143e4a95/control-plane-machine-set-operator/0.log" Jan 21 21:42:14 crc kubenswrapper[4860]: I0121 21:42:14.820388 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jx5dt_40070d0f-4d18-4d7c-a85a-cd2f904ea27a/kube-rbac-proxy/0.log" Jan 21 21:42:14 crc kubenswrapper[4860]: 
I0121 21:42:14.835130 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jx5dt_40070d0f-4d18-4d7c-a85a-cd2f904ea27a/machine-api-operator/0.log" Jan 21 21:42:15 crc kubenswrapper[4860]: I0121 21:42:15.075364 4860 generic.go:334] "Generic (PLEG): container finished" podID="65f95265-628f-4909-b078-c4628101396b" containerID="83a81ebb34d1ed3c18aa9b2adfa4d8e5fdacfdcf09b95088da668959ef54c300" exitCode=0 Jan 21 21:42:15 crc kubenswrapper[4860]: I0121 21:42:15.075446 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"65f95265-628f-4909-b078-c4628101396b","Type":"ContainerDied","Data":"83a81ebb34d1ed3c18aa9b2adfa4d8e5fdacfdcf09b95088da668959ef54c300"} Jan 21 21:42:16 crc kubenswrapper[4860]: E0121 21:42:16.405187 4860 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.227:36708->38.102.83.227:38857: write tcp 38.102.83.227:36708->38.102.83.227:38857: write: broken pipe Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.117482 4860 generic.go:334] "Generic (PLEG): container finished" podID="65f95265-628f-4909-b078-c4628101396b" containerID="0b5998872bf585fdad38e605b0035e0a982ca648f45ece9e1fdb27930b51e98e" exitCode=0 Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.118004 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"65f95265-628f-4909-b078-c4628101396b","Type":"ContainerDied","Data":"0b5998872bf585fdad38e605b0035e0a982ca648f45ece9e1fdb27930b51e98e"} Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.277972 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.368063 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcrj7\" (UniqueName: \"kubernetes.io/projected/65f95265-628f-4909-b078-c4628101396b-kube-api-access-qcrj7\") pod \"65f95265-628f-4909-b078-c4628101396b\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.368146 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-ceilometer-tls-certs\") pod \"65f95265-628f-4909-b078-c4628101396b\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.368196 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-log-httpd\") pod \"65f95265-628f-4909-b078-c4628101396b\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.368279 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-config-data\") pod \"65f95265-628f-4909-b078-c4628101396b\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.368345 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-run-httpd\") pod \"65f95265-628f-4909-b078-c4628101396b\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.368393 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-scripts\") pod \"65f95265-628f-4909-b078-c4628101396b\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.368439 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-sg-core-conf-yaml\") pod \"65f95265-628f-4909-b078-c4628101396b\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.368457 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-combined-ca-bundle\") pod \"65f95265-628f-4909-b078-c4628101396b\" (UID: \"65f95265-628f-4909-b078-c4628101396b\") " Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.369658 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65f95265-628f-4909-b078-c4628101396b" (UID: "65f95265-628f-4909-b078-c4628101396b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.370741 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65f95265-628f-4909-b078-c4628101396b" (UID: "65f95265-628f-4909-b078-c4628101396b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.376596 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f95265-628f-4909-b078-c4628101396b-kube-api-access-qcrj7" (OuterVolumeSpecName: "kube-api-access-qcrj7") pod "65f95265-628f-4909-b078-c4628101396b" (UID: "65f95265-628f-4909-b078-c4628101396b"). InnerVolumeSpecName "kube-api-access-qcrj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.391610 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-scripts" (OuterVolumeSpecName: "scripts") pod "65f95265-628f-4909-b078-c4628101396b" (UID: "65f95265-628f-4909-b078-c4628101396b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.407553 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65f95265-628f-4909-b078-c4628101396b" (UID: "65f95265-628f-4909-b078-c4628101396b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.455145 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "65f95265-628f-4909-b078-c4628101396b" (UID: "65f95265-628f-4909-b078-c4628101396b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.459678 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65f95265-628f-4909-b078-c4628101396b" (UID: "65f95265-628f-4909-b078-c4628101396b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.471444 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.471581 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.471643 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.471717 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.471782 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcrj7\" (UniqueName: \"kubernetes.io/projected/65f95265-628f-4909-b078-c4628101396b-kube-api-access-qcrj7\") on node \"crc\" DevicePath \"\"" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.471853 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.471923 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65f95265-628f-4909-b078-c4628101396b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.475036 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-config-data" (OuterVolumeSpecName: "config-data") pod "65f95265-628f-4909-b078-c4628101396b" (UID: "65f95265-628f-4909-b078-c4628101396b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:42:18 crc kubenswrapper[4860]: I0121 21:42:18.573128 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f95265-628f-4909-b078-c4628101396b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.133861 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"65f95265-628f-4909-b078-c4628101396b","Type":"ContainerDied","Data":"06c9f469e93000dd54020ed23dd9f30e8a6f9621d6375a982fb365472755ac78"} Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.133969 4860 scope.go:117] "RemoveContainer" containerID="afabf6395d19c8740502efad9b18f373180ff206b8bf3abf4858c01bf28606d1" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.134037 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.170277 4860 scope.go:117] "RemoveContainer" containerID="a91a0d77c4b6352896380a5ed9d3650dfeaa975ccc482574487cdb82d532c287" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.174384 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.182542 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.212689 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:42:19 crc kubenswrapper[4860]: E0121 21:42:19.213263 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="sg-core" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.213285 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="sg-core" Jan 21 21:42:19 crc kubenswrapper[4860]: E0121 21:42:19.213302 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="proxy-httpd" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.213310 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="proxy-httpd" Jan 21 21:42:19 crc kubenswrapper[4860]: E0121 21:42:19.213325 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="ceilometer-notification-agent" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.213332 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="ceilometer-notification-agent" Jan 21 21:42:19 crc kubenswrapper[4860]: E0121 21:42:19.213341 4860 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="ceilometer-central-agent" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.213347 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="ceilometer-central-agent" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.213540 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="sg-core" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.213565 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="proxy-httpd" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.213587 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="ceilometer-notification-agent" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.213597 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f95265-628f-4909-b078-c4628101396b" containerName="ceilometer-central-agent" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.216725 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.220200 4860 scope.go:117] "RemoveContainer" containerID="0b5998872bf585fdad38e605b0035e0a982ca648f45ece9e1fdb27930b51e98e" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.222888 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.223030 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.222903 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.232050 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.287878 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.287982 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-config-data\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.288019 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.288098 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-log-httpd\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.288094 4860 scope.go:117] "RemoveContainer" containerID="83a81ebb34d1ed3c18aa9b2adfa4d8e5fdacfdcf09b95088da668959ef54c300" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.288145 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-scripts\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.288490 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-run-httpd\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.288741 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxxg9\" (UniqueName: \"kubernetes.io/projected/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-kube-api-access-hxxg9\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.288773 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.390780 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.390861 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-config-data\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.390902 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.390991 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-log-httpd\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.391027 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-scripts\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.391091 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-run-httpd\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.391164 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxxg9\" (UniqueName: \"kubernetes.io/projected/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-kube-api-access-hxxg9\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.391189 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.391788 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-run-httpd\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.391877 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-log-httpd\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" 
Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.397561 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.397776 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-scripts\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.399165 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-config-data\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.399353 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.402415 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.414846 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxxg9\" (UniqueName: 
\"kubernetes.io/projected/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-kube-api-access-hxxg9\") pod \"ceilometer-0\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.572859 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.963240 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:42:19 crc kubenswrapper[4860]: I0121 21:42:19.985583 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 21:42:20 crc kubenswrapper[4860]: I0121 21:42:20.147671 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fdc641b1-70ed-4718-a49c-beb8a40bfc4f","Type":"ContainerStarted","Data":"f5895a15ccf6211ff4955d0f8ab6b68521c28fd6170344c1b20cab0c7f399e03"} Jan 21 21:42:20 crc kubenswrapper[4860]: I0121 21:42:20.593315 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f95265-628f-4909-b078-c4628101396b" path="/var/lib/kubelet/pods/65f95265-628f-4909-b078-c4628101396b/volumes" Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.079139 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt"] Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.081205 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.084090 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5k4pz"/"default-dockercfg-hhs2c" Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.098196 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt"] Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.132391 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbvnq\" (UniqueName: \"kubernetes.io/projected/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-kube-api-access-hbvnq\") pod \"must-gather-t8b54-debug-788qt\" (UID: \"e3813d0a-32c6-4eb4-8369-26f7ed4f377f\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.132488 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-must-gather-output\") pod \"must-gather-t8b54-debug-788qt\" (UID: \"e3813d0a-32c6-4eb4-8369-26f7ed4f377f\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.166956 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fdc641b1-70ed-4718-a49c-beb8a40bfc4f","Type":"ContainerStarted","Data":"71d5a7bf33d6f2cf7017920afe20403cb5753c87d57c58f84953f3d3ff7ae0c9"} Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.236776 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbvnq\" (UniqueName: \"kubernetes.io/projected/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-kube-api-access-hbvnq\") pod \"must-gather-t8b54-debug-788qt\" (UID: 
\"e3813d0a-32c6-4eb4-8369-26f7ed4f377f\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.236970 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-must-gather-output\") pod \"must-gather-t8b54-debug-788qt\" (UID: \"e3813d0a-32c6-4eb4-8369-26f7ed4f377f\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.237535 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-must-gather-output\") pod \"must-gather-t8b54-debug-788qt\" (UID: \"e3813d0a-32c6-4eb4-8369-26f7ed4f377f\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.263582 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbvnq\" (UniqueName: \"kubernetes.io/projected/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-kube-api-access-hbvnq\") pod \"must-gather-t8b54-debug-788qt\" (UID: \"e3813d0a-32c6-4eb4-8369-26f7ed4f377f\") " pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.402581 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" Jan 21 21:42:21 crc kubenswrapper[4860]: I0121 21:42:21.957201 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt"] Jan 21 21:42:21 crc kubenswrapper[4860]: W0121 21:42:21.964217 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3813d0a_32c6_4eb4_8369_26f7ed4f377f.slice/crio-1029a9d0458ad0c7d991ee0a8911224a6ec2b90e68ef3b9b4d45a31e2a272739 WatchSource:0}: Error finding container 1029a9d0458ad0c7d991ee0a8911224a6ec2b90e68ef3b9b4d45a31e2a272739: Status 404 returned error can't find the container with id 1029a9d0458ad0c7d991ee0a8911224a6ec2b90e68ef3b9b4d45a31e2a272739 Jan 21 21:42:22 crc kubenswrapper[4860]: I0121 21:42:22.177841 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" event={"ID":"e3813d0a-32c6-4eb4-8369-26f7ed4f377f","Type":"ContainerStarted","Data":"1029a9d0458ad0c7d991ee0a8911224a6ec2b90e68ef3b9b4d45a31e2a272739"} Jan 21 21:42:22 crc kubenswrapper[4860]: I0121 21:42:22.193094 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fdc641b1-70ed-4718-a49c-beb8a40bfc4f","Type":"ContainerStarted","Data":"b7cb7644788f0bceef302fcaf16abd212555cc88959fc2c28351e514187b1764"} Jan 21 21:42:23 crc kubenswrapper[4860]: I0121 21:42:23.204058 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" event={"ID":"e3813d0a-32c6-4eb4-8369-26f7ed4f377f","Type":"ContainerStarted","Data":"0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800"} Jan 21 21:42:23 crc kubenswrapper[4860]: I0121 21:42:23.204570 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" 
event={"ID":"e3813d0a-32c6-4eb4-8369-26f7ed4f377f","Type":"ContainerStarted","Data":"08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b"} Jan 21 21:42:23 crc kubenswrapper[4860]: I0121 21:42:23.207502 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fdc641b1-70ed-4718-a49c-beb8a40bfc4f","Type":"ContainerStarted","Data":"4503e59c9921275b7098ca860022c11c3093fd54ca442274de735d5314474f9d"} Jan 21 21:42:24 crc kubenswrapper[4860]: I0121 21:42:24.199656 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-wzmgt_20199873-120c-483b-b74e-6d501fdb151a/cert-manager-controller/0.log" Jan 21 21:42:24 crc kubenswrapper[4860]: I0121 21:42:24.220981 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-m5v7j_fa444955-5bc4-4188-9b3e-80b24e9e6cb4/cert-manager-cainjector/0.log" Jan 21 21:42:24 crc kubenswrapper[4860]: I0121 21:42:24.240929 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-zvf7j_5889d6e2-f3dc-4189-a782-cf0ad4db5e55/cert-manager-webhook/0.log" Jan 21 21:42:25 crc kubenswrapper[4860]: I0121 21:42:25.238807 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fdc641b1-70ed-4718-a49c-beb8a40bfc4f","Type":"ContainerStarted","Data":"96080baf45b6d4c90048eda95f7a144f287611e6c577d4feb326064863ffd4bd"} Jan 21 21:42:25 crc kubenswrapper[4860]: I0121 21:42:25.239617 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 21 21:42:25 crc kubenswrapper[4860]: I0121 21:42:25.278274 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" podStartSLOduration=4.278232107 podStartE2EDuration="4.278232107s" podCreationTimestamp="2026-01-21 21:42:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:42:23.232759771 +0000 UTC m=+2035.454938261" watchObservedRunningTime="2026-01-21 21:42:25.278232107 +0000 UTC m=+2037.500410577" Jan 21 21:42:25 crc kubenswrapper[4860]: I0121 21:42:25.279720 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.9660875770000001 podStartE2EDuration="6.279709203s" podCreationTimestamp="2026-01-21 21:42:19 +0000 UTC" firstStartedPulling="2026-01-21 21:42:19.985232365 +0000 UTC m=+2032.207410825" lastFinishedPulling="2026-01-21 21:42:24.298853971 +0000 UTC m=+2036.521032451" observedRunningTime="2026-01-21 21:42:25.267425523 +0000 UTC m=+2037.489604003" watchObservedRunningTime="2026-01-21 21:42:25.279709203 +0000 UTC m=+2037.501887673" Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.223522 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v5l9m"] Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.226378 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v5l9m" Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.257444 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v5l9m"] Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.364371 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-utilities\") pod \"redhat-operators-v5l9m\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") " pod="openshift-marketplace/redhat-operators-v5l9m" Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.364430 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-catalog-content\") pod \"redhat-operators-v5l9m\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") " pod="openshift-marketplace/redhat-operators-v5l9m" Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.364640 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddv8m\" (UniqueName: \"kubernetes.io/projected/d7297b78-7cf5-49ba-a319-76b8c126fe9c-kube-api-access-ddv8m\") pod \"redhat-operators-v5l9m\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") " pod="openshift-marketplace/redhat-operators-v5l9m" Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.466614 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-utilities\") pod \"redhat-operators-v5l9m\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") " pod="openshift-marketplace/redhat-operators-v5l9m" Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.467127 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-catalog-content\") pod \"redhat-operators-v5l9m\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") " pod="openshift-marketplace/redhat-operators-v5l9m" Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.467217 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddv8m\" (UniqueName: \"kubernetes.io/projected/d7297b78-7cf5-49ba-a319-76b8c126fe9c-kube-api-access-ddv8m\") pod \"redhat-operators-v5l9m\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") " pod="openshift-marketplace/redhat-operators-v5l9m" Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.468505 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-utilities\") pod \"redhat-operators-v5l9m\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") " pod="openshift-marketplace/redhat-operators-v5l9m" Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.468547 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-catalog-content\") pod \"redhat-operators-v5l9m\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") " pod="openshift-marketplace/redhat-operators-v5l9m" Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.521848 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddv8m\" (UniqueName: \"kubernetes.io/projected/d7297b78-7cf5-49ba-a319-76b8c126fe9c-kube-api-access-ddv8m\") pod \"redhat-operators-v5l9m\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") " pod="openshift-marketplace/redhat-operators-v5l9m" Jan 21 21:42:31 crc kubenswrapper[4860]: I0121 21:42:31.546676 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v5l9m" Jan 21 21:42:32 crc kubenswrapper[4860]: I0121 21:42:32.082287 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v5l9m"] Jan 21 21:42:32 crc kubenswrapper[4860]: I0121 21:42:32.103817 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:42:32 crc kubenswrapper[4860]: I0121 21:42:32.103891 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:42:32 crc kubenswrapper[4860]: I0121 21:42:32.318869 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5l9m" event={"ID":"d7297b78-7cf5-49ba-a319-76b8c126fe9c","Type":"ContainerStarted","Data":"67303e99b2cd82fa8e18d5c5e853910636dde2fcb50c3c3779717ad95d9d798c"} Jan 21 21:42:32 crc kubenswrapper[4860]: I0121 21:42:32.498773 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-82rm8_b6c5b0be-96f9-4141-a721-54ca98a89d93/nmstate-console-plugin/0.log" Jan 21 21:42:32 crc kubenswrapper[4860]: I0121 21:42:32.522841 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-66jdw_4ccac8fa-d2c8-4110-9bd4-78a6340612f9/nmstate-handler/0.log" Jan 21 21:42:32 crc kubenswrapper[4860]: I0121 21:42:32.546192 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ktn72_8364952a-bcf3-49ae-b357-0521e9d6e04e/nmstate-metrics/0.log" Jan 21 21:42:32 crc kubenswrapper[4860]: I0121 21:42:32.559593 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ktn72_8364952a-bcf3-49ae-b357-0521e9d6e04e/kube-rbac-proxy/0.log" Jan 21 21:42:32 crc kubenswrapper[4860]: I0121 21:42:32.575571 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tpllw_5f9bf17c-9142-474a-8a94-7e8cc90702f0/nmstate-operator/0.log" Jan 21 21:42:32 crc kubenswrapper[4860]: I0121 21:42:32.621487 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-wnc66_cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d/nmstate-webhook/0.log" Jan 21 21:42:33 crc kubenswrapper[4860]: I0121 21:42:33.331157 4860 generic.go:334] "Generic (PLEG): container finished" podID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerID="c2d484d9c7097ec703eaa74c8b629dee931f550ac9c1729b90b0b13befdb0baa" exitCode=0 Jan 21 21:42:33 crc kubenswrapper[4860]: I0121 21:42:33.331251 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5l9m" event={"ID":"d7297b78-7cf5-49ba-a319-76b8c126fe9c","Type":"ContainerDied","Data":"c2d484d9c7097ec703eaa74c8b629dee931f550ac9c1729b90b0b13befdb0baa"} Jan 21 21:42:35 crc kubenswrapper[4860]: I0121 21:42:35.357376 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5l9m" event={"ID":"d7297b78-7cf5-49ba-a319-76b8c126fe9c","Type":"ContainerStarted","Data":"1872c6ad4a70c90c2988c6facb23243d6b4aaaf88b8a0dad0bc8a40e7e516770"} Jan 21 21:42:37 crc kubenswrapper[4860]: I0121 21:42:37.382762 4860 generic.go:334] "Generic (PLEG): container finished" podID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerID="1872c6ad4a70c90c2988c6facb23243d6b4aaaf88b8a0dad0bc8a40e7e516770" exitCode=0 
Jan 21 21:42:37 crc kubenswrapper[4860]: I0121 21:42:37.383035 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5l9m" event={"ID":"d7297b78-7cf5-49ba-a319-76b8c126fe9c","Type":"ContainerDied","Data":"1872c6ad4a70c90c2988c6facb23243d6b4aaaf88b8a0dad0bc8a40e7e516770"}
Jan 21 21:42:39 crc kubenswrapper[4860]: I0121 21:42:39.411900 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5l9m" event={"ID":"d7297b78-7cf5-49ba-a319-76b8c126fe9c","Type":"ContainerStarted","Data":"f489394d0e876bcb493c4aff3e70147ddcf3be9efe7b96ea1182b9768cc43862"}
Jan 21 21:42:39 crc kubenswrapper[4860]: I0121 21:42:39.449821 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v5l9m" podStartSLOduration=3.327184414 podStartE2EDuration="8.449795833s" podCreationTimestamp="2026-01-21 21:42:31 +0000 UTC" firstStartedPulling="2026-01-21 21:42:33.334536278 +0000 UTC m=+2045.556714748" lastFinishedPulling="2026-01-21 21:42:38.457147697 +0000 UTC m=+2050.679326167" observedRunningTime="2026-01-21 21:42:39.447126731 +0000 UTC m=+2051.669305211" watchObservedRunningTime="2026-01-21 21:42:39.449795833 +0000 UTC m=+2051.671974303"
Jan 21 21:42:39 crc kubenswrapper[4860]: I0121 21:42:39.950281 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q67c7_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f/prometheus-operator/0.log"
Jan 21 21:42:39 crc kubenswrapper[4860]: I0121 21:42:39.962383 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_a1ce9223-1adf-48f8-a0bf-31ce28e5719f/prometheus-operator-admission-webhook/0.log"
Jan 21 21:42:39 crc kubenswrapper[4860]: I0121 21:42:39.978668 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_b2f8b6ee-0b46-4492-ae99-aea050eed563/prometheus-operator-admission-webhook/0.log"
Jan 21 21:42:40 crc kubenswrapper[4860]: I0121 21:42:40.015472 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-t8zjn_db3166f1-3c99-4217-859b-24835c6f1f1e/operator/0.log"
Jan 21 21:42:40 crc kubenswrapper[4860]: I0121 21:42:40.029494 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-qj2fs_6a4226f5-36cd-49b1-bbf3-2d13973b45b5/observability-ui-dashboards/0.log"
Jan 21 21:42:40 crc kubenswrapper[4860]: I0121 21:42:40.049943 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-mv2g7_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a/perses-operator/0.log"
Jan 21 21:42:41 crc kubenswrapper[4860]: I0121 21:42:41.547186 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v5l9m"
Jan 21 21:42:41 crc kubenswrapper[4860]: I0121 21:42:41.547291 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v5l9m"
Jan 21 21:42:42 crc kubenswrapper[4860]: I0121 21:42:42.613242 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v5l9m" podUID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerName="registry-server" probeResult="failure" output=<
Jan 21 21:42:42 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Jan 21 21:42:42 crc kubenswrapper[4860]: >
Jan 21 21:42:45 crc kubenswrapper[4860]: I0121 21:42:45.808915 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt"]
Jan 21 21:42:45 crc kubenswrapper[4860]: I0121 21:42:45.810277 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" podUID="e3813d0a-32c6-4eb4-8369-26f7ed4f377f" containerName="gather" containerID="cri-o://08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b" gracePeriod=2
Jan 21 21:42:45 crc kubenswrapper[4860]: I0121 21:42:45.810319 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt" podUID="e3813d0a-32c6-4eb4-8369-26f7ed4f377f" containerName="copy" containerID="cri-o://0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800" gracePeriod=2
Jan 21 21:42:45 crc kubenswrapper[4860]: I0121 21:42:45.830430 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt"]
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.331915 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5k4pz_must-gather-t8b54-debug-788qt_e3813d0a-32c6-4eb4-8369-26f7ed4f377f/copy/0.log"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.332631 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.482110 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbvnq\" (UniqueName: \"kubernetes.io/projected/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-kube-api-access-hbvnq\") pod \"e3813d0a-32c6-4eb4-8369-26f7ed4f377f\" (UID: \"e3813d0a-32c6-4eb4-8369-26f7ed4f377f\") "
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.482207 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5k4pz_must-gather-t8b54-debug-788qt_e3813d0a-32c6-4eb4-8369-26f7ed4f377f/copy/0.log"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.482392 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-must-gather-output\") pod \"e3813d0a-32c6-4eb4-8369-26f7ed4f377f\" (UID: \"e3813d0a-32c6-4eb4-8369-26f7ed4f377f\") "
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.482637 4860 generic.go:334] "Generic (PLEG): container finished" podID="e3813d0a-32c6-4eb4-8369-26f7ed4f377f" containerID="0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800" exitCode=143
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.482664 4860 generic.go:334] "Generic (PLEG): container finished" podID="e3813d0a-32c6-4eb4-8369-26f7ed4f377f" containerID="08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b" exitCode=0
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.482713 4860 scope.go:117] "RemoveContainer" containerID="0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.482749 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5k4pz/must-gather-t8b54-debug-788qt"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.483070 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e3813d0a-32c6-4eb4-8369-26f7ed4f377f" (UID: "e3813d0a-32c6-4eb4-8369-26f7ed4f377f"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.489284 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-kube-api-access-hbvnq" (OuterVolumeSpecName: "kube-api-access-hbvnq") pod "e3813d0a-32c6-4eb4-8369-26f7ed4f377f" (UID: "e3813d0a-32c6-4eb4-8369-26f7ed4f377f"). InnerVolumeSpecName "kube-api-access-hbvnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.548867 4860 scope.go:117] "RemoveContainer" containerID="08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.572906 4860 scope.go:117] "RemoveContainer" containerID="0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800"
Jan 21 21:42:46 crc kubenswrapper[4860]: E0121 21:42:46.573599 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800\": container with ID starting with 0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800 not found: ID does not exist" containerID="0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.573642 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800"} err="failed to get container status \"0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800\": rpc error: code = NotFound desc = could not find container \"0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800\": container with ID starting with 0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800 not found: ID does not exist"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.573680 4860 scope.go:117] "RemoveContainer" containerID="08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b"
Jan 21 21:42:46 crc kubenswrapper[4860]: E0121 21:42:46.574311 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b\": container with ID starting with 08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b not found: ID does not exist" containerID="08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.574421 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b"} err="failed to get container status \"08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b\": rpc error: code = NotFound desc = could not find container \"08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b\": container with ID starting with 08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b not found: ID does not exist"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.574497 4860 scope.go:117] "RemoveContainer" containerID="0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.575072 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800"} err="failed to get container status \"0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800\": rpc error: code = NotFound desc = could not find container \"0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800\": container with ID starting with 0b554acc8fde1e31815fb50e2acafb25f09a466d3b3716fcf84e35c30add6800 not found: ID does not exist"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.575161 4860 scope.go:117] "RemoveContainer" containerID="08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.575441 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b"} err="failed to get container status \"08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b\": rpc error: code = NotFound desc = could not find container \"08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b\": container with ID starting with 08c077d32ebd112036b20feeb995129bf97a61941df57b7f129ebb0ba21be32b not found: ID does not exist"
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.584228 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbvnq\" (UniqueName: \"kubernetes.io/projected/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-kube-api-access-hbvnq\") on node \"crc\" DevicePath \"\""
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.584268 4860 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3813d0a-32c6-4eb4-8369-26f7ed4f377f-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 21 21:42:46 crc kubenswrapper[4860]: I0121 21:42:46.591028 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3813d0a-32c6-4eb4-8369-26f7ed4f377f" path="/var/lib/kubelet/pods/e3813d0a-32c6-4eb4-8369-26f7ed4f377f/volumes"
Jan 21 21:42:47 crc kubenswrapper[4860]: I0121 21:42:47.500402 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/controller/0.log"
Jan 21 21:42:47 crc kubenswrapper[4860]: I0121 21:42:47.509277 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/kube-rbac-proxy/0.log"
Jan 21 21:42:47 crc kubenswrapper[4860]: I0121 21:42:47.534632 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/controller/0.log"
Jan 21 21:42:48 crc kubenswrapper[4860]: I0121 21:42:48.787516 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr/0.log"
Jan 21 21:42:48 crc kubenswrapper[4860]: I0121 21:42:48.797075 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/reloader/0.log"
Jan 21 21:42:48 crc kubenswrapper[4860]: I0121 21:42:48.806173 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr-metrics/0.log"
Jan 21 21:42:48 crc kubenswrapper[4860]: I0121 21:42:48.817091 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy/0.log"
Jan 21 21:42:48 crc kubenswrapper[4860]: I0121 21:42:48.831056 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy-frr/0.log"
Jan 21 21:42:48 crc kubenswrapper[4860]: I0121 21:42:48.848230 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-frr-files/0.log"
Jan 21 21:42:48 crc kubenswrapper[4860]: I0121 21:42:48.863880 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-reloader/0.log"
Jan 21 21:42:48 crc kubenswrapper[4860]: I0121 21:42:48.880249 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-metrics/0.log"
Jan 21 21:42:48 crc kubenswrapper[4860]: I0121 21:42:48.909851 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6vpls_e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e/frr-k8s-webhook-server/0.log"
Jan 21 21:42:48 crc kubenswrapper[4860]: I0121 21:42:48.946106 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5844d47cc5-cxs88_c8584c36-7092-4bd3-b92e-5a3e8c16ec63/manager/0.log"
Jan 21 21:42:48 crc kubenswrapper[4860]: I0121 21:42:48.962231 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ccfb7bd9d-w49p7_f6d67ae0-be03-465f-bb51-ace581cc0bb8/webhook-server/0.log"
Jan 21 21:42:49 crc kubenswrapper[4860]: I0121 21:42:49.177591 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/speaker/0.log"
Jan 21 21:42:49 crc kubenswrapper[4860]: I0121 21:42:49.189411 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/kube-rbac-proxy/0.log"
Jan 21 21:42:49 crc kubenswrapper[4860]: I0121 21:42:49.583214 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:42:51 crc kubenswrapper[4860]: I0121 21:42:51.605426 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v5l9m"
Jan 21 21:42:51 crc kubenswrapper[4860]: I0121 21:42:51.659846 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v5l9m"
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.209283 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v5l9m"]
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.210852 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v5l9m" podUID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerName="registry-server" containerID="cri-o://f489394d0e876bcb493c4aff3e70147ddcf3be9efe7b96ea1182b9768cc43862" gracePeriod=2
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.633173 4860 generic.go:334] "Generic (PLEG): container finished" podID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerID="f489394d0e876bcb493c4aff3e70147ddcf3be9efe7b96ea1182b9768cc43862" exitCode=0
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.634018 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5l9m" event={"ID":"d7297b78-7cf5-49ba-a319-76b8c126fe9c","Type":"ContainerDied","Data":"f489394d0e876bcb493c4aff3e70147ddcf3be9efe7b96ea1182b9768cc43862"}
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.802866 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v5l9m"
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.889435 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddv8m\" (UniqueName: \"kubernetes.io/projected/d7297b78-7cf5-49ba-a319-76b8c126fe9c-kube-api-access-ddv8m\") pod \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") "
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.889528 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-utilities\") pod \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") "
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.889792 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-catalog-content\") pod \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\" (UID: \"d7297b78-7cf5-49ba-a319-76b8c126fe9c\") "
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.892442 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-utilities" (OuterVolumeSpecName: "utilities") pod "d7297b78-7cf5-49ba-a319-76b8c126fe9c" (UID: "d7297b78-7cf5-49ba-a319-76b8c126fe9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.899965 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7297b78-7cf5-49ba-a319-76b8c126fe9c-kube-api-access-ddv8m" (OuterVolumeSpecName: "kube-api-access-ddv8m") pod "d7297b78-7cf5-49ba-a319-76b8c126fe9c" (UID: "d7297b78-7cf5-49ba-a319-76b8c126fe9c"). InnerVolumeSpecName "kube-api-access-ddv8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.992894 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddv8m\" (UniqueName: \"kubernetes.io/projected/d7297b78-7cf5-49ba-a319-76b8c126fe9c-kube-api-access-ddv8m\") on node \"crc\" DevicePath \"\""
Jan 21 21:42:55 crc kubenswrapper[4860]: I0121 21:42:55.993455 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 21:42:56 crc kubenswrapper[4860]: I0121 21:42:56.034927 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7297b78-7cf5-49ba-a319-76b8c126fe9c" (UID: "d7297b78-7cf5-49ba-a319-76b8c126fe9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:42:56 crc kubenswrapper[4860]: I0121 21:42:56.095909 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7297b78-7cf5-49ba-a319-76b8c126fe9c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 21:42:56 crc kubenswrapper[4860]: I0121 21:42:56.647861 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5l9m" event={"ID":"d7297b78-7cf5-49ba-a319-76b8c126fe9c","Type":"ContainerDied","Data":"67303e99b2cd82fa8e18d5c5e853910636dde2fcb50c3c3779717ad95d9d798c"}
Jan 21 21:42:56 crc kubenswrapper[4860]: I0121 21:42:56.647959 4860 scope.go:117] "RemoveContainer" containerID="f489394d0e876bcb493c4aff3e70147ddcf3be9efe7b96ea1182b9768cc43862"
Jan 21 21:42:56 crc kubenswrapper[4860]: I0121 21:42:56.647995 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v5l9m"
Jan 21 21:42:56 crc kubenswrapper[4860]: I0121 21:42:56.683017 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v5l9m"]
Jan 21 21:42:56 crc kubenswrapper[4860]: I0121 21:42:56.684251 4860 scope.go:117] "RemoveContainer" containerID="1872c6ad4a70c90c2988c6facb23243d6b4aaaf88b8a0dad0bc8a40e7e516770"
Jan 21 21:42:56 crc kubenswrapper[4860]: I0121 21:42:56.693363 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v5l9m"]
Jan 21 21:42:56 crc kubenswrapper[4860]: I0121 21:42:56.749014 4860 scope.go:117] "RemoveContainer" containerID="c2d484d9c7097ec703eaa74c8b629dee931f550ac9c1729b90b0b13befdb0baa"
Jan 21 21:42:58 crc kubenswrapper[4860]: I0121 21:42:58.594916 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" path="/var/lib/kubelet/pods/d7297b78-7cf5-49ba-a319-76b8c126fe9c/volumes"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.162146 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"]
Jan 21 21:43:00 crc kubenswrapper[4860]: E0121 21:43:00.163044 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3813d0a-32c6-4eb4-8369-26f7ed4f377f" containerName="gather"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.163059 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3813d0a-32c6-4eb4-8369-26f7ed4f377f" containerName="gather"
Jan 21 21:43:00 crc kubenswrapper[4860]: E0121 21:43:00.163078 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerName="registry-server"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.163084 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerName="registry-server"
Jan 21 21:43:00 crc kubenswrapper[4860]: E0121 21:43:00.163097 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerName="extract-content"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.163104 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerName="extract-content"
Jan 21 21:43:00 crc kubenswrapper[4860]: E0121 21:43:00.163128 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerName="extract-utilities"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.163134 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerName="extract-utilities"
Jan 21 21:43:00 crc kubenswrapper[4860]: E0121 21:43:00.163146 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3813d0a-32c6-4eb4-8369-26f7ed4f377f" containerName="copy"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.163151 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3813d0a-32c6-4eb4-8369-26f7ed4f377f" containerName="copy"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.163306 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3813d0a-32c6-4eb4-8369-26f7ed4f377f" containerName="gather"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.163334 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3813d0a-32c6-4eb4-8369-26f7ed4f377f" containerName="copy"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.163344 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7297b78-7cf5-49ba-a319-76b8c126fe9c" containerName="registry-server"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.164140 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.168141 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-scripts"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.170712 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.181283 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"]
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.204094 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-scripts-volume\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.204276 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.204400 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-config-data\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.204438 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9xd8\" (UniqueName: \"kubernetes.io/projected/0ddb46b6-4d4f-4789-8960-477050afd2e5-kube-api-access-b9xd8\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.305403 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-scripts-volume\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.305470 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.305506 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-config-data\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.305527 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9xd8\" (UniqueName: \"kubernetes.io/projected/0ddb46b6-4d4f-4789-8960-477050afd2e5-kube-api-access-b9xd8\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.314533 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-config-data\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.315498 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.316104 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-scripts-volume\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.332207 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9xd8\" (UniqueName: \"kubernetes.io/projected/0ddb46b6-4d4f-4789-8960-477050afd2e5-kube-api-access-b9xd8\") pod \"watcher-kuttl-db-purge-29483863-84w5l\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.484614 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:00 crc kubenswrapper[4860]: I0121 21:43:00.993769 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"]
Jan 21 21:43:01 crc kubenswrapper[4860]: I0121 21:43:01.703002 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l" event={"ID":"0ddb46b6-4d4f-4789-8960-477050afd2e5","Type":"ContainerStarted","Data":"76f40618846fffbff60e7351aee8f861d3dbbf86a88932e442822bfd25124eb9"}
Jan 21 21:43:01 crc kubenswrapper[4860]: I0121 21:43:01.703082 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l" event={"ID":"0ddb46b6-4d4f-4789-8960-477050afd2e5","Type":"ContainerStarted","Data":"fd57bf2eabc832a1f53725ba2fd188ca3da38a164500e4fa2330ed67f027885c"}
Jan 21 21:43:01 crc kubenswrapper[4860]: I0121 21:43:01.734149 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l" podStartSLOduration=1.734113858 podStartE2EDuration="1.734113858s" podCreationTimestamp="2026-01-21 21:43:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:43:01.730762547 +0000 UTC m=+2073.952941027" watchObservedRunningTime="2026-01-21 21:43:01.734113858 +0000 UTC m=+2073.956292328"
Jan 21 21:43:02 crc kubenswrapper[4860]: I0121 21:43:02.103849 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:43:02 crc kubenswrapper[4860]: I0121 21:43:02.104412 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:43:02 crc kubenswrapper[4860]: I0121 21:43:02.104499 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx"
Jan 21 21:43:02 crc kubenswrapper[4860]: I0121 21:43:02.105743 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32a9f1332c2c5de681bf846ae634d50dfe1d50c28bd4d09220c269cccaea8975"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 21:43:02 crc kubenswrapper[4860]: I0121 21:43:02.105835 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://32a9f1332c2c5de681bf846ae634d50dfe1d50c28bd4d09220c269cccaea8975" gracePeriod=600
Jan 21 21:43:02 crc kubenswrapper[4860]: I0121 21:43:02.721981 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="32a9f1332c2c5de681bf846ae634d50dfe1d50c28bd4d09220c269cccaea8975" exitCode=0
Jan 21 21:43:02 crc kubenswrapper[4860]: I0121 21:43:02.722084 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"32a9f1332c2c5de681bf846ae634d50dfe1d50c28bd4d09220c269cccaea8975"}
Jan 21 21:43:02 crc kubenswrapper[4860]: I0121 21:43:02.722968 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"}
Jan 21 21:43:02 crc kubenswrapper[4860]: I0121 21:43:02.723030 4860 scope.go:117] "RemoveContainer" containerID="c56e46672e59ff80aac4e70bc09639dc012d66de24119dba3b0d822b9bb08e97"
Jan 21 21:43:04 crc kubenswrapper[4860]: I0121 21:43:04.752213 4860 generic.go:334] "Generic (PLEG): container finished" podID="0ddb46b6-4d4f-4789-8960-477050afd2e5" containerID="76f40618846fffbff60e7351aee8f861d3dbbf86a88932e442822bfd25124eb9" exitCode=0
Jan 21 21:43:04 crc kubenswrapper[4860]: I0121 21:43:04.752311 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l" event={"ID":"0ddb46b6-4d4f-4789-8960-477050afd2e5","Type":"ContainerDied","Data":"76f40618846fffbff60e7351aee8f861d3dbbf86a88932e442822bfd25124eb9"}
Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.145882 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"
Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.340048 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-config-data\") pod \"0ddb46b6-4d4f-4789-8960-477050afd2e5\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") "
Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.340160 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-scripts-volume\") pod \"0ddb46b6-4d4f-4789-8960-477050afd2e5\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") "
Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.340429 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9xd8\" (UniqueName: \"kubernetes.io/projected/0ddb46b6-4d4f-4789-8960-477050afd2e5-kube-api-access-b9xd8\") pod \"0ddb46b6-4d4f-4789-8960-477050afd2e5\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") "
Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.340533 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-combined-ca-bundle\") pod \"0ddb46b6-4d4f-4789-8960-477050afd2e5\" (UID: \"0ddb46b6-4d4f-4789-8960-477050afd2e5\") "
Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.348670 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-scripts-volume" (OuterVolumeSpecName: "scripts-volume") pod "0ddb46b6-4d4f-4789-8960-477050afd2e5" (UID: "0ddb46b6-4d4f-4789-8960-477050afd2e5"). InnerVolumeSpecName "scripts-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.350063 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ddb46b6-4d4f-4789-8960-477050afd2e5-kube-api-access-b9xd8" (OuterVolumeSpecName: "kube-api-access-b9xd8") pod "0ddb46b6-4d4f-4789-8960-477050afd2e5" (UID: "0ddb46b6-4d4f-4789-8960-477050afd2e5"). InnerVolumeSpecName "kube-api-access-b9xd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.368835 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ddb46b6-4d4f-4789-8960-477050afd2e5" (UID: "0ddb46b6-4d4f-4789-8960-477050afd2e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.400122 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-config-data" (OuterVolumeSpecName: "config-data") pod "0ddb46b6-4d4f-4789-8960-477050afd2e5" (UID: "0ddb46b6-4d4f-4789-8960-477050afd2e5"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.443639 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.443695 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.443710 4860 reconciler_common.go:293] "Volume detached for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/0ddb46b6-4d4f-4789-8960-477050afd2e5-scripts-volume\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.443726 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9xd8\" (UniqueName: \"kubernetes.io/projected/0ddb46b6-4d4f-4789-8960-477050afd2e5-kube-api-access-b9xd8\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.775128 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l" event={"ID":"0ddb46b6-4d4f-4789-8960-477050afd2e5","Type":"ContainerDied","Data":"fd57bf2eabc832a1f53725ba2fd188ca3da38a164500e4fa2330ed67f027885c"} Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.775728 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd57bf2eabc832a1f53725ba2fd188ca3da38a164500e4fa2330ed67f027885c" Jan 21 21:43:06 crc kubenswrapper[4860]: I0121 21:43:06.775260 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l" Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.533442 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-r84fd"] Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.606543 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-r84fd"] Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.606602 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"] Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.613867 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29483863-84w5l"] Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.634166 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-v2z2t"] Jan 21 21:43:08 crc kubenswrapper[4860]: E0121 21:43:08.634855 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ddb46b6-4d4f-4789-8960-477050afd2e5" containerName="watcher-db-manage" Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.634890 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ddb46b6-4d4f-4789-8960-477050afd2e5" containerName="watcher-db-manage" Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.635118 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ddb46b6-4d4f-4789-8960-477050afd2e5" containerName="watcher-db-manage" Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.636157 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.647855 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-v2z2t"] Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.711596 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.711914 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="1913aa9d-f183-4d88-b640-6b2be407a629" containerName="watcher-applier" containerID="cri-o://fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b" gracePeriod=30 Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.729663 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.730022 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerName="watcher-kuttl-api-log" containerID="cri-o://de32ca698c806ee639bb8c516aa05f1087d9269904df3178759a12425124c204" gracePeriod=30 Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.730570 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerName="watcher-api" containerID="cri-o://53b34565402eacfd3146739d1f8bc81867be6a0dca80b39f007324b55dd7aa0d" gracePeriod=30 Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.777590 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.777897 4860 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerName="watcher-kuttl-api-log" containerID="cri-o://9bea35e633708a43b7ae34b6b12ed92eefb43980b3f5ba60bd4d9751ee45c048" gracePeriod=30 Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.778451 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerName="watcher-api" containerID="cri-o://ae5bc36d0e71c63a0966731d8fbab3ef5e458b5496302fdc73280ad1779bc116" gracePeriod=30 Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.789101 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-operator-scripts\") pod \"watchertest-account-delete-v2z2t\" (UID: \"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0\") " pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.789201 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htr5g\" (UniqueName: \"kubernetes.io/projected/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-kube-api-access-htr5g\") pod \"watchertest-account-delete-v2z2t\" (UID: \"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0\") " pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.793031 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.793363 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="8731e357-3b33-4bc0-8f0b-3f69dc31b93f" containerName="watcher-decision-engine" 
containerID="cri-o://5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de" gracePeriod=30 Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.891599 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-operator-scripts\") pod \"watchertest-account-delete-v2z2t\" (UID: \"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0\") " pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.891718 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htr5g\" (UniqueName: \"kubernetes.io/projected/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-kube-api-access-htr5g\") pod \"watchertest-account-delete-v2z2t\" (UID: \"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0\") " pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.893132 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-operator-scripts\") pod \"watchertest-account-delete-v2z2t\" (UID: \"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0\") " pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.933794 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htr5g\" (UniqueName: \"kubernetes.io/projected/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-kube-api-access-htr5g\") pod \"watchertest-account-delete-v2z2t\" (UID: \"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0\") " pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" Jan 21 21:43:08 crc kubenswrapper[4860]: I0121 21:43:08.972379 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" Jan 21 21:43:09 crc kubenswrapper[4860]: I0121 21:43:09.615568 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-v2z2t"] Jan 21 21:43:09 crc kubenswrapper[4860]: I0121 21:43:09.834568 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" event={"ID":"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0","Type":"ContainerStarted","Data":"86af1660cb582df6437eb6acf1132005873c2cdf63cc2daaa581babefae63e04"} Jan 21 21:43:09 crc kubenswrapper[4860]: I0121 21:43:09.834621 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" event={"ID":"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0","Type":"ContainerStarted","Data":"5960f16f92737dcab37002e0d6c3eced3b533b9fbbd43d995fe28519e7306fc2"} Jan 21 21:43:09 crc kubenswrapper[4860]: I0121 21:43:09.837481 4860 generic.go:334] "Generic (PLEG): container finished" podID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerID="de32ca698c806ee639bb8c516aa05f1087d9269904df3178759a12425124c204" exitCode=143 Jan 21 21:43:09 crc kubenswrapper[4860]: I0121 21:43:09.837536 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"3f04d464-3d71-4581-bf35-3e19f06eaeb2","Type":"ContainerDied","Data":"de32ca698c806ee639bb8c516aa05f1087d9269904df3178759a12425124c204"} Jan 21 21:43:09 crc kubenswrapper[4860]: I0121 21:43:09.840225 4860 generic.go:334] "Generic (PLEG): container finished" podID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerID="9bea35e633708a43b7ae34b6b12ed92eefb43980b3f5ba60bd4d9751ee45c048" exitCode=143 Jan 21 21:43:09 crc kubenswrapper[4860]: I0121 21:43:09.840286 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" 
event={"ID":"c07cb085-cf53-46c9-bc02-04be321dd57e","Type":"ContainerDied","Data":"9bea35e633708a43b7ae34b6b12ed92eefb43980b3f5ba60bd4d9751ee45c048"} Jan 21 21:43:09 crc kubenswrapper[4860]: I0121 21:43:09.854047 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" podStartSLOduration=1.8539243829999998 podStartE2EDuration="1.853924383s" podCreationTimestamp="2026-01-21 21:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:43:09.848451827 +0000 UTC m=+2082.070630297" watchObservedRunningTime="2026-01-21 21:43:09.853924383 +0000 UTC m=+2082.076102853" Jan 21 21:43:10 crc kubenswrapper[4860]: I0121 21:43:10.592978 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ddb46b6-4d4f-4789-8960-477050afd2e5" path="/var/lib/kubelet/pods/0ddb46b6-4d4f-4789-8960-477050afd2e5/volumes" Jan 21 21:43:10 crc kubenswrapper[4860]: I0121 21:43:10.593902 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06f822b-9fe0-4619-9346-0404f0ab0210" path="/var/lib/kubelet/pods/f06f822b-9fe0-4619-9346-0404f0ab0210/volumes" Jan 21 21:43:10 crc kubenswrapper[4860]: I0121 21:43:10.638245 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.225:9322/\": dial tcp 10.217.0.225:9322: connect: connection refused" Jan 21 21:43:10 crc kubenswrapper[4860]: I0121 21:43:10.638243 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.225:9322/\": dial tcp 10.217.0.225:9322: connect: connection refused" Jan 21 21:43:10 crc 
kubenswrapper[4860]: I0121 21:43:10.653469 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.226:9322/\": dial tcp 10.217.0.226:9322: connect: connection refused" Jan 21 21:43:10 crc kubenswrapper[4860]: I0121 21:43:10.653594 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.226:9322/\": dial tcp 10.217.0.226:9322: connect: connection refused" Jan 21 21:43:10 crc kubenswrapper[4860]: I0121 21:43:10.866649 4860 generic.go:334] "Generic (PLEG): container finished" podID="5fd4b380-3d3e-40c3-a383-93d1cd09e7f0" containerID="86af1660cb582df6437eb6acf1132005873c2cdf63cc2daaa581babefae63e04" exitCode=0 Jan 21 21:43:10 crc kubenswrapper[4860]: I0121 21:43:10.867014 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" event={"ID":"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0","Type":"ContainerDied","Data":"86af1660cb582df6437eb6acf1132005873c2cdf63cc2daaa581babefae63e04"} Jan 21 21:43:10 crc kubenswrapper[4860]: I0121 21:43:10.876903 4860 generic.go:334] "Generic (PLEG): container finished" podID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerID="53b34565402eacfd3146739d1f8bc81867be6a0dca80b39f007324b55dd7aa0d" exitCode=0 Jan 21 21:43:10 crc kubenswrapper[4860]: I0121 21:43:10.877088 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"3f04d464-3d71-4581-bf35-3e19f06eaeb2","Type":"ContainerDied","Data":"53b34565402eacfd3146739d1f8bc81867be6a0dca80b39f007324b55dd7aa0d"} Jan 21 21:43:10 crc kubenswrapper[4860]: I0121 21:43:10.881023 4860 generic.go:334] "Generic (PLEG): container finished" 
podID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerID="ae5bc36d0e71c63a0966731d8fbab3ef5e458b5496302fdc73280ad1779bc116" exitCode=0 Jan 21 21:43:10 crc kubenswrapper[4860]: I0121 21:43:10.881071 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"c07cb085-cf53-46c9-bc02-04be321dd57e","Type":"ContainerDied","Data":"ae5bc36d0e71c63a0966731d8fbab3ef5e458b5496302fdc73280ad1779bc116"} Jan 21 21:43:10 crc kubenswrapper[4860]: E0121 21:43:10.980985 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:43:10 crc kubenswrapper[4860]: E0121 21:43:10.987433 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:43:10 crc kubenswrapper[4860]: E0121 21:43:10.991288 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 21 21:43:10 crc kubenswrapper[4860]: E0121 21:43:10.991404 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="1913aa9d-f183-4d88-b640-6b2be407a629" 
containerName="watcher-applier" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.134263 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.141914 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.146026 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-custom-prometheus-ca\") pod \"c07cb085-cf53-46c9-bc02-04be321dd57e\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.146127 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-config-data\") pod \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.146168 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-combined-ca-bundle\") pod \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.146196 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f04d464-3d71-4581-bf35-3e19f06eaeb2-logs\") pod \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.146224 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-combined-ca-bundle\") pod \"c07cb085-cf53-46c9-bc02-04be321dd57e\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.146262 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-cert-memcached-mtls\") pod \"c07cb085-cf53-46c9-bc02-04be321dd57e\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.146373 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-cert-memcached-mtls\") pod \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.146412 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c07cb085-cf53-46c9-bc02-04be321dd57e-logs\") pod \"c07cb085-cf53-46c9-bc02-04be321dd57e\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.146472 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d27bt\" (UniqueName: \"kubernetes.io/projected/c07cb085-cf53-46c9-bc02-04be321dd57e-kube-api-access-d27bt\") pod \"c07cb085-cf53-46c9-bc02-04be321dd57e\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.146501 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnlrp\" (UniqueName: \"kubernetes.io/projected/3f04d464-3d71-4581-bf35-3e19f06eaeb2-kube-api-access-fnlrp\") pod \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " Jan 21 21:43:11 
crc kubenswrapper[4860]: I0121 21:43:11.146524 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-config-data\") pod \"c07cb085-cf53-46c9-bc02-04be321dd57e\" (UID: \"c07cb085-cf53-46c9-bc02-04be321dd57e\") " Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.146559 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-custom-prometheus-ca\") pod \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\" (UID: \"3f04d464-3d71-4581-bf35-3e19f06eaeb2\") " Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.148596 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f04d464-3d71-4581-bf35-3e19f06eaeb2-logs" (OuterVolumeSpecName: "logs") pod "3f04d464-3d71-4581-bf35-3e19f06eaeb2" (UID: "3f04d464-3d71-4581-bf35-3e19f06eaeb2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.148869 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c07cb085-cf53-46c9-bc02-04be321dd57e-logs" (OuterVolumeSpecName: "logs") pod "c07cb085-cf53-46c9-bc02-04be321dd57e" (UID: "c07cb085-cf53-46c9-bc02-04be321dd57e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.154404 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f04d464-3d71-4581-bf35-3e19f06eaeb2-kube-api-access-fnlrp" (OuterVolumeSpecName: "kube-api-access-fnlrp") pod "3f04d464-3d71-4581-bf35-3e19f06eaeb2" (UID: "3f04d464-3d71-4581-bf35-3e19f06eaeb2"). InnerVolumeSpecName "kube-api-access-fnlrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.155128 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07cb085-cf53-46c9-bc02-04be321dd57e-kube-api-access-d27bt" (OuterVolumeSpecName: "kube-api-access-d27bt") pod "c07cb085-cf53-46c9-bc02-04be321dd57e" (UID: "c07cb085-cf53-46c9-bc02-04be321dd57e"). InnerVolumeSpecName "kube-api-access-d27bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.202158 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c07cb085-cf53-46c9-bc02-04be321dd57e" (UID: "c07cb085-cf53-46c9-bc02-04be321dd57e"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.217585 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3f04d464-3d71-4581-bf35-3e19f06eaeb2" (UID: "3f04d464-3d71-4581-bf35-3e19f06eaeb2"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.234639 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c07cb085-cf53-46c9-bc02-04be321dd57e" (UID: "c07cb085-cf53-46c9-bc02-04be321dd57e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.250213 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d27bt\" (UniqueName: \"kubernetes.io/projected/c07cb085-cf53-46c9-bc02-04be321dd57e-kube-api-access-d27bt\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.250244 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnlrp\" (UniqueName: \"kubernetes.io/projected/3f04d464-3d71-4581-bf35-3e19f06eaeb2-kube-api-access-fnlrp\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.250253 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.250262 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.250271 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f04d464-3d71-4581-bf35-3e19f06eaeb2-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.250281 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.250289 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c07cb085-cf53-46c9-bc02-04be321dd57e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 
21:43:11.262951 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f04d464-3d71-4581-bf35-3e19f06eaeb2" (UID: "3f04d464-3d71-4581-bf35-3e19f06eaeb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.270237 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-config-data" (OuterVolumeSpecName: "config-data") pod "c07cb085-cf53-46c9-bc02-04be321dd57e" (UID: "c07cb085-cf53-46c9-bc02-04be321dd57e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.307039 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "3f04d464-3d71-4581-bf35-3e19f06eaeb2" (UID: "3f04d464-3d71-4581-bf35-3e19f06eaeb2"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.309722 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c07cb085-cf53-46c9-bc02-04be321dd57e" (UID: "c07cb085-cf53-46c9-bc02-04be321dd57e"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.315797 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-config-data" (OuterVolumeSpecName: "config-data") pod "3f04d464-3d71-4581-bf35-3e19f06eaeb2" (UID: "3f04d464-3d71-4581-bf35-3e19f06eaeb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.352473 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.352521 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.352531 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.352548 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c07cb085-cf53-46c9-bc02-04be321dd57e-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.352559 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f04d464-3d71-4581-bf35-3e19f06eaeb2-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.916714 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"3f04d464-3d71-4581-bf35-3e19f06eaeb2","Type":"ContainerDied","Data":"bb90f33e48d22cfc9d185422ec888d5687066aa1aeb113293ebe1f6d501eb642"} Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.916806 4860 scope.go:117] "RemoveContainer" containerID="53b34565402eacfd3146739d1f8bc81867be6a0dca80b39f007324b55dd7aa0d" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.917145 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.930997 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.931078 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"c07cb085-cf53-46c9-bc02-04be321dd57e","Type":"ContainerDied","Data":"f85a4f57a62b8b00d99173513c94889c21e73feb63c95fa755024792f1e3113e"} Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.978186 4860 scope.go:117] "RemoveContainer" containerID="de32ca698c806ee639bb8c516aa05f1087d9269904df3178759a12425124c204" Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.990644 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:43:11 crc kubenswrapper[4860]: I0121 21:43:11.993069 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.018036 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.027224 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.058898 4860 scope.go:117] "RemoveContainer" 
containerID="ae5bc36d0e71c63a0966731d8fbab3ef5e458b5496302fdc73280ad1779bc116" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.107417 4860 scope.go:117] "RemoveContainer" containerID="9bea35e633708a43b7ae34b6b12ed92eefb43980b3f5ba60bd4d9751ee45c048" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.319662 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.352060 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.352467 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="ceilometer-central-agent" containerID="cri-o://71d5a7bf33d6f2cf7017920afe20403cb5753c87d57c58f84953f3d3ff7ae0c9" gracePeriod=30 Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.352523 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="proxy-httpd" containerID="cri-o://96080baf45b6d4c90048eda95f7a144f287611e6c577d4feb326064863ffd4bd" gracePeriod=30 Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.352637 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="sg-core" containerID="cri-o://4503e59c9921275b7098ca860022c11c3093fd54ca442274de735d5314474f9d" gracePeriod=30 Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.352686 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="ceilometer-notification-agent" 
containerID="cri-o://b7cb7644788f0bceef302fcaf16abd212555cc88959fc2c28351e514187b1764" gracePeriod=30 Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.374985 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htr5g\" (UniqueName: \"kubernetes.io/projected/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-kube-api-access-htr5g\") pod \"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0\" (UID: \"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0\") " Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.375059 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-operator-scripts\") pod \"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0\" (UID: \"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0\") " Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.388098 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5fd4b380-3d3e-40c3-a383-93d1cd09e7f0" (UID: "5fd4b380-3d3e-40c3-a383-93d1cd09e7f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.422733 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-kube-api-access-htr5g" (OuterVolumeSpecName: "kube-api-access-htr5g") pod "5fd4b380-3d3e-40c3-a383-93d1cd09e7f0" (UID: "5fd4b380-3d3e-40c3-a383-93d1cd09e7f0"). InnerVolumeSpecName "kube-api-access-htr5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.478624 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htr5g\" (UniqueName: \"kubernetes.io/projected/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-kube-api-access-htr5g\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.478738 4860 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.611606 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" path="/var/lib/kubelet/pods/3f04d464-3d71-4581-bf35-3e19f06eaeb2/volumes" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.614875 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07cb085-cf53-46c9-bc02-04be321dd57e" path="/var/lib/kubelet/pods/c07cb085-cf53-46c9-bc02-04be321dd57e/volumes" Jan 21 21:43:12 crc kubenswrapper[4860]: E0121 21:43:12.728173 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1913aa9d_f183_4d88_b640_6b2be407a629.slice/crio-fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc641b1_70ed_4718_a49c_beb8a40bfc4f.slice/crio-conmon-4503e59c9921275b7098ca860022c11c3093fd54ca442274de735d5314474f9d.scope\": RecentStats: unable to find data in memory cache]" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.943887 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.945473 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" event={"ID":"5fd4b380-3d3e-40c3-a383-93d1cd09e7f0","Type":"ContainerDied","Data":"5960f16f92737dcab37002e0d6c3eced3b533b9fbbd43d995fe28519e7306fc2"} Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.945515 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5960f16f92737dcab37002e0d6c3eced3b533b9fbbd43d995fe28519e7306fc2" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.945573 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-v2z2t" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.953762 4860 generic.go:334] "Generic (PLEG): container finished" podID="1913aa9d-f183-4d88-b640-6b2be407a629" containerID="fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b" exitCode=0 Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.953849 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1913aa9d-f183-4d88-b640-6b2be407a629","Type":"ContainerDied","Data":"fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b"} Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.953877 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1913aa9d-f183-4d88-b640-6b2be407a629","Type":"ContainerDied","Data":"b6424811b48903c42469561597aae18066a90be011d0992cb5b6f150064f4161"} Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.953896 4860 scope.go:117] "RemoveContainer" containerID="fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.954009 4860 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.961204 4860 generic.go:334] "Generic (PLEG): container finished" podID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerID="96080baf45b6d4c90048eda95f7a144f287611e6c577d4feb326064863ffd4bd" exitCode=0 Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.961231 4860 generic.go:334] "Generic (PLEG): container finished" podID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerID="4503e59c9921275b7098ca860022c11c3093fd54ca442274de735d5314474f9d" exitCode=2 Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.961240 4860 generic.go:334] "Generic (PLEG): container finished" podID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerID="71d5a7bf33d6f2cf7017920afe20403cb5753c87d57c58f84953f3d3ff7ae0c9" exitCode=0 Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.961261 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fdc641b1-70ed-4718-a49c-beb8a40bfc4f","Type":"ContainerDied","Data":"96080baf45b6d4c90048eda95f7a144f287611e6c577d4feb326064863ffd4bd"} Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.961290 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fdc641b1-70ed-4718-a49c-beb8a40bfc4f","Type":"ContainerDied","Data":"4503e59c9921275b7098ca860022c11c3093fd54ca442274de735d5314474f9d"} Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.961305 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fdc641b1-70ed-4718-a49c-beb8a40bfc4f","Type":"ContainerDied","Data":"71d5a7bf33d6f2cf7017920afe20403cb5753c87d57c58f84953f3d3ff7ae0c9"} Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.991802 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpb6p\" (UniqueName: 
\"kubernetes.io/projected/1913aa9d-f183-4d88-b640-6b2be407a629-kube-api-access-kpb6p\") pod \"1913aa9d-f183-4d88-b640-6b2be407a629\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.991909 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-config-data\") pod \"1913aa9d-f183-4d88-b640-6b2be407a629\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.991961 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-cert-memcached-mtls\") pod \"1913aa9d-f183-4d88-b640-6b2be407a629\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.992035 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1913aa9d-f183-4d88-b640-6b2be407a629-logs\") pod \"1913aa9d-f183-4d88-b640-6b2be407a629\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " Jan 21 21:43:12 crc kubenswrapper[4860]: I0121 21:43:12.992087 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-combined-ca-bundle\") pod \"1913aa9d-f183-4d88-b640-6b2be407a629\" (UID: \"1913aa9d-f183-4d88-b640-6b2be407a629\") " Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.001482 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1913aa9d-f183-4d88-b640-6b2be407a629-logs" (OuterVolumeSpecName: "logs") pod "1913aa9d-f183-4d88-b640-6b2be407a629" (UID: "1913aa9d-f183-4d88-b640-6b2be407a629"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.007280 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1913aa9d-f183-4d88-b640-6b2be407a629-kube-api-access-kpb6p" (OuterVolumeSpecName: "kube-api-access-kpb6p") pod "1913aa9d-f183-4d88-b640-6b2be407a629" (UID: "1913aa9d-f183-4d88-b640-6b2be407a629"). InnerVolumeSpecName "kube-api-access-kpb6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.018236 4860 scope.go:117] "RemoveContainer" containerID="fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b" Jan 21 21:43:13 crc kubenswrapper[4860]: E0121 21:43:13.026232 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b\": container with ID starting with fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b not found: ID does not exist" containerID="fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.026335 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b"} err="failed to get container status \"fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b\": rpc error: code = NotFound desc = could not find container \"fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b\": container with ID starting with fe8ed0ec71cf2eae92e916d313f0fa9dace9d03e7beed586d4e9e0ab3ed3980b not found: ID does not exist" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.046841 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "1913aa9d-f183-4d88-b640-6b2be407a629" (UID: "1913aa9d-f183-4d88-b640-6b2be407a629"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.064025 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-config-data" (OuterVolumeSpecName: "config-data") pod "1913aa9d-f183-4d88-b640-6b2be407a629" (UID: "1913aa9d-f183-4d88-b640-6b2be407a629"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.094004 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1913aa9d-f183-4d88-b640-6b2be407a629-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.094050 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.094066 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpb6p\" (UniqueName: \"kubernetes.io/projected/1913aa9d-f183-4d88-b640-6b2be407a629-kube-api-access-kpb6p\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.094079 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.106071 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod 
"1913aa9d-f183-4d88-b640-6b2be407a629" (UID: "1913aa9d-f183-4d88-b640-6b2be407a629"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.196428 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1913aa9d-f183-4d88-b640-6b2be407a629-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.297385 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.313278 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.640344 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-mh86c"] Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.651673 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-mh86c"] Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.683851 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-hdjlt"] Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.686873 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-v2z2t"] Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.696121 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-v2z2t"] Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.705728 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-hdjlt"] Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.795127 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.912983 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-config-data\") pod \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.913046 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbpf\" (UniqueName: \"kubernetes.io/projected/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-kube-api-access-qqbpf\") pod \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.913073 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-custom-prometheus-ca\") pod \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.913113 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-combined-ca-bundle\") pod \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.913161 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-logs\") pod \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.913182 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-cert-memcached-mtls\") pod \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\" (UID: \"8731e357-3b33-4bc0-8f0b-3f69dc31b93f\") " Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.917231 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-logs" (OuterVolumeSpecName: "logs") pod "8731e357-3b33-4bc0-8f0b-3f69dc31b93f" (UID: "8731e357-3b33-4bc0-8f0b-3f69dc31b93f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.922785 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-kube-api-access-qqbpf" (OuterVolumeSpecName: "kube-api-access-qqbpf") pod "8731e357-3b33-4bc0-8f0b-3f69dc31b93f" (UID: "8731e357-3b33-4bc0-8f0b-3f69dc31b93f"). InnerVolumeSpecName "kube-api-access-qqbpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.971031 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8731e357-3b33-4bc0-8f0b-3f69dc31b93f" (UID: "8731e357-3b33-4bc0-8f0b-3f69dc31b93f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.981087 4860 generic.go:334] "Generic (PLEG): container finished" podID="8731e357-3b33-4bc0-8f0b-3f69dc31b93f" containerID="5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de" exitCode=0 Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.981152 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8731e357-3b33-4bc0-8f0b-3f69dc31b93f","Type":"ContainerDied","Data":"5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de"} Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.981183 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8731e357-3b33-4bc0-8f0b-3f69dc31b93f","Type":"ContainerDied","Data":"b787c602b6faf7797c406649f7b5b0722ebea56f64bdddc8c74c8f6e8f2e4a87"} Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.981204 4860 scope.go:117] "RemoveContainer" containerID="5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de" Jan 21 21:43:13 crc kubenswrapper[4860]: I0121 21:43:13.981430 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.016912 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqbpf\" (UniqueName: \"kubernetes.io/projected/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-kube-api-access-qqbpf\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.016958 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.016969 4860 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.018854 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "8731e357-3b33-4bc0-8f0b-3f69dc31b93f" (UID: "8731e357-3b33-4bc0-8f0b-3f69dc31b93f"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.023213 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-config-data" (OuterVolumeSpecName: "config-data") pod "8731e357-3b33-4bc0-8f0b-3f69dc31b93f" (UID: "8731e357-3b33-4bc0-8f0b-3f69dc31b93f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.027453 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8731e357-3b33-4bc0-8f0b-3f69dc31b93f" (UID: "8731e357-3b33-4bc0-8f0b-3f69dc31b93f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.090174 4860 scope.go:117] "RemoveContainer" containerID="5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de" Jan 21 21:43:14 crc kubenswrapper[4860]: E0121 21:43:14.093030 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de\": container with ID starting with 5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de not found: ID does not exist" containerID="5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.093094 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de"} err="failed to get container status \"5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de\": rpc error: code = NotFound desc = could not find container \"5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de\": container with ID starting with 5ddd3d78c0381baf3783688203e5c312b6ebf248c5e27942b7d76b9592e887de not found: ID does not exist" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.118874 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.118918 4860 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.118957 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8731e357-3b33-4bc0-8f0b-3f69dc31b93f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.324327 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.333609 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.591999 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1913aa9d-f183-4d88-b640-6b2be407a629" path="/var/lib/kubelet/pods/1913aa9d-f183-4d88-b640-6b2be407a629/volumes" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.592784 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd4b380-3d3e-40c3-a383-93d1cd09e7f0" path="/var/lib/kubelet/pods/5fd4b380-3d3e-40c3-a383-93d1cd09e7f0/volumes" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.593563 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8731e357-3b33-4bc0-8f0b-3f69dc31b93f" path="/var/lib/kubelet/pods/8731e357-3b33-4bc0-8f0b-3f69dc31b93f/volumes" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.595547 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff" path="/var/lib/kubelet/pods/faa83b7a-ce4f-4bbe-95d8-7a0eda6f07ff/volumes" Jan 21 21:43:14 crc kubenswrapper[4860]: I0121 21:43:14.596119 4860 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7" path="/var/lib/kubelet/pods/fe6b5ccd-c2f2-4a25-bb24-7e5969f778e7/volumes"
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.038990 4860 generic.go:334] "Generic (PLEG): container finished" podID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerID="b7cb7644788f0bceef302fcaf16abd212555cc88959fc2c28351e514187b1764" exitCode=0
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.039208 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fdc641b1-70ed-4718-a49c-beb8a40bfc4f","Type":"ContainerDied","Data":"b7cb7644788f0bceef302fcaf16abd212555cc88959fc2c28351e514187b1764"}
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.398768 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.564628 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-ceilometer-tls-certs\") pod \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") "
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.564735 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-combined-ca-bundle\") pod \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") "
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.564783 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-config-data\") pod \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") "
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.564806 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-scripts\") pod \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") "
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.564854 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-run-httpd\") pod \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") "
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.565037 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-sg-core-conf-yaml\") pod \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") "
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.565108 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxxg9\" (UniqueName: \"kubernetes.io/projected/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-kube-api-access-hxxg9\") pod \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") "
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.565142 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-log-httpd\") pod \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\" (UID: \"fdc641b1-70ed-4718-a49c-beb8a40bfc4f\") "
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.566563 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fdc641b1-70ed-4718-a49c-beb8a40bfc4f" (UID: "fdc641b1-70ed-4718-a49c-beb8a40bfc4f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.567080 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fdc641b1-70ed-4718-a49c-beb8a40bfc4f" (UID: "fdc641b1-70ed-4718-a49c-beb8a40bfc4f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.582408 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-kube-api-access-hxxg9" (OuterVolumeSpecName: "kube-api-access-hxxg9") pod "fdc641b1-70ed-4718-a49c-beb8a40bfc4f" (UID: "fdc641b1-70ed-4718-a49c-beb8a40bfc4f"). InnerVolumeSpecName "kube-api-access-hxxg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.596160 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-scripts" (OuterVolumeSpecName: "scripts") pod "fdc641b1-70ed-4718-a49c-beb8a40bfc4f" (UID: "fdc641b1-70ed-4718-a49c-beb8a40bfc4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.673530 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxxg9\" (UniqueName: \"kubernetes.io/projected/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-kube-api-access-hxxg9\") on node \"crc\" DevicePath \"\""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.674002 4860 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.674118 4860 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.674202 4860 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.742212 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdc641b1-70ed-4718-a49c-beb8a40bfc4f" (UID: "fdc641b1-70ed-4718-a49c-beb8a40bfc4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.743272 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fdc641b1-70ed-4718-a49c-beb8a40bfc4f" (UID: "fdc641b1-70ed-4718-a49c-beb8a40bfc4f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.767038 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fdc641b1-70ed-4718-a49c-beb8a40bfc4f" (UID: "fdc641b1-70ed-4718-a49c-beb8a40bfc4f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.776719 4860 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.776767 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.776779 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.792710 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-config-data" (OuterVolumeSpecName: "config-data") pod "fdc641b1-70ed-4718-a49c-beb8a40bfc4f" (UID: "fdc641b1-70ed-4718-a49c-beb8a40bfc4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 21:43:16 crc kubenswrapper[4860]: I0121 21:43:16.878381 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc641b1-70ed-4718-a49c-beb8a40bfc4f-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.057255 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fdc641b1-70ed-4718-a49c-beb8a40bfc4f","Type":"ContainerDied","Data":"f5895a15ccf6211ff4955d0f8ab6b68521c28fd6170344c1b20cab0c7f399e03"}
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.057334 4860 scope.go:117] "RemoveContainer" containerID="96080baf45b6d4c90048eda95f7a144f287611e6c577d4feb326064863ffd4bd"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.057356 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.087822 4860 scope.go:117] "RemoveContainer" containerID="4503e59c9921275b7098ca860022c11c3093fd54ca442274de735d5314474f9d"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.111647 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.114086 4860 scope.go:117] "RemoveContainer" containerID="b7cb7644788f0bceef302fcaf16abd212555cc88959fc2c28351e514187b1764"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.122823 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.160286 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:43:17 crc kubenswrapper[4860]: E0121 21:43:17.160748 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="proxy-httpd"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.160772 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="proxy-httpd"
Jan 21 21:43:17 crc kubenswrapper[4860]: E0121 21:43:17.160800 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="sg-core"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.160809 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="sg-core"
Jan 21 21:43:17 crc kubenswrapper[4860]: E0121 21:43:17.160818 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerName="watcher-kuttl-api-log"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.160829 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerName="watcher-kuttl-api-log"
Jan 21 21:43:17 crc kubenswrapper[4860]: E0121 21:43:17.160848 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerName="watcher-api"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.160855 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerName="watcher-api"
Jan 21 21:43:17 crc kubenswrapper[4860]: E0121 21:43:17.160868 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="ceilometer-central-agent"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.160875 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="ceilometer-central-agent"
Jan 21 21:43:17 crc kubenswrapper[4860]: E0121 21:43:17.160887 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerName="watcher-api"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.160895 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerName="watcher-api"
Jan 21 21:43:17 crc kubenswrapper[4860]: E0121 21:43:17.160903 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd4b380-3d3e-40c3-a383-93d1cd09e7f0" containerName="mariadb-account-delete"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.160910 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd4b380-3d3e-40c3-a383-93d1cd09e7f0" containerName="mariadb-account-delete"
Jan 21 21:43:17 crc kubenswrapper[4860]: E0121 21:43:17.160920 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8731e357-3b33-4bc0-8f0b-3f69dc31b93f" containerName="watcher-decision-engine"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.160926 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8731e357-3b33-4bc0-8f0b-3f69dc31b93f" containerName="watcher-decision-engine"
Jan 21 21:43:17 crc kubenswrapper[4860]: E0121 21:43:17.160956 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1913aa9d-f183-4d88-b640-6b2be407a629" containerName="watcher-applier"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.160967 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1913aa9d-f183-4d88-b640-6b2be407a629" containerName="watcher-applier"
Jan 21 21:43:17 crc kubenswrapper[4860]: E0121 21:43:17.160989 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="ceilometer-notification-agent"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.160996 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="ceilometer-notification-agent"
Jan 21 21:43:17 crc kubenswrapper[4860]: E0121 21:43:17.161009 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerName="watcher-kuttl-api-log"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161014 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerName="watcher-kuttl-api-log"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161193 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8731e357-3b33-4bc0-8f0b-3f69dc31b93f" containerName="watcher-decision-engine"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161212 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerName="watcher-api"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161225 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="proxy-httpd"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161240 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerName="watcher-kuttl-api-log"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161256 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="ceilometer-central-agent"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161270 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c07cb085-cf53-46c9-bc02-04be321dd57e" containerName="watcher-kuttl-api-log"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161282 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="sg-core"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161293 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd4b380-3d3e-40c3-a383-93d1cd09e7f0" containerName="mariadb-account-delete"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161304 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" containerName="ceilometer-notification-agent"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161317 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1913aa9d-f183-4d88-b640-6b2be407a629" containerName="watcher-applier"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.161327 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f04d464-3d71-4581-bf35-3e19f06eaeb2" containerName="watcher-api"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.168320 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.173831 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.174585 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.175103 4860 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.191168 4860 scope.go:117] "RemoveContainer" containerID="71d5a7bf33d6f2cf7017920afe20403cb5753c87d57c58f84953f3d3ff7ae0c9"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.203561 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.286455 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-scripts\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.286548 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nhzr\" (UniqueName: \"kubernetes.io/projected/05c42c02-4391-4c36-932c-dc0f3cdb80d7-kube-api-access-4nhzr\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.286584 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c42c02-4391-4c36-932c-dc0f3cdb80d7-run-httpd\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.286635 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-config-data\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.286661 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c42c02-4391-4c36-932c-dc0f3cdb80d7-log-httpd\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.286894 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.287203 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.287249 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.389305 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c42c02-4391-4c36-932c-dc0f3cdb80d7-log-httpd\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.389373 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.389431 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.389499 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.389562 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-scripts\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.389611 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nhzr\" (UniqueName: \"kubernetes.io/projected/05c42c02-4391-4c36-932c-dc0f3cdb80d7-kube-api-access-4nhzr\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.389637 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c42c02-4391-4c36-932c-dc0f3cdb80d7-run-httpd\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.389690 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-config-data\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.391216 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c42c02-4391-4c36-932c-dc0f3cdb80d7-run-httpd\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.391219 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c42c02-4391-4c36-932c-dc0f3cdb80d7-log-httpd\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.396763 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.398667 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-scripts\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.399846 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-config-data\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.415280 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nhzr\" (UniqueName: \"kubernetes.io/projected/05c42c02-4391-4c36-932c-dc0f3cdb80d7-kube-api-access-4nhzr\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.417926 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.419653 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c42c02-4391-4c36-932c-dc0f3cdb80d7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"05c42c02-4391-4c36-932c-dc0f3cdb80d7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.487519 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:17 crc kubenswrapper[4860]: I0121 21:43:17.986667 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 21 21:43:18 crc kubenswrapper[4860]: I0121 21:43:18.068306 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05c42c02-4391-4c36-932c-dc0f3cdb80d7","Type":"ContainerStarted","Data":"4b61d75e601c101f08d8df3c72a6642609f2a80765efe1ae5f9434ae6313c92b"}
Jan 21 21:43:18 crc kubenswrapper[4860]: I0121 21:43:18.590107 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc641b1-70ed-4718-a49c-beb8a40bfc4f" path="/var/lib/kubelet/pods/fdc641b1-70ed-4718-a49c-beb8a40bfc4f/volumes"
Jan 21 21:43:19 crc kubenswrapper[4860]: I0121 21:43:19.086316 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05c42c02-4391-4c36-932c-dc0f3cdb80d7","Type":"ContainerStarted","Data":"870ca69b0beff19621d3e10f9d2e595a17f031729368a707e938bcd12a982768"}
Jan 21 21:43:20 crc kubenswrapper[4860]: I0121 21:43:20.105367 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05c42c02-4391-4c36-932c-dc0f3cdb80d7","Type":"ContainerStarted","Data":"a8bab6ebdff8d437af3249ebf8e486d2b07de131e1b300735a63b4a09405bbb3"}
Jan 21 21:43:21 crc kubenswrapper[4860]: I0121 21:43:21.119012 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05c42c02-4391-4c36-932c-dc0f3cdb80d7","Type":"ContainerStarted","Data":"cee54607f32dc9d6ef94e1fe98a2ead6f7efc86ff92c4bfdf49bb50b28bfaa36"}
Jan 21 21:43:22 crc kubenswrapper[4860]: I0121 21:43:22.134040 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05c42c02-4391-4c36-932c-dc0f3cdb80d7","Type":"ContainerStarted","Data":"8f944c29b985dd658a7c73133eaf18e575d69f7636f65e93645c9adcb738183f"}
Jan 21 21:43:22 crc kubenswrapper[4860]: I0121 21:43:22.134521 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:22 crc kubenswrapper[4860]: I0121 21:43:22.163887 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.7726497700000001 podStartE2EDuration="5.163858204s" podCreationTimestamp="2026-01-21 21:43:17 +0000 UTC" firstStartedPulling="2026-01-21 21:43:18.001128683 +0000 UTC m=+2090.223307153" lastFinishedPulling="2026-01-21 21:43:21.392337117 +0000 UTC m=+2093.614515587" observedRunningTime="2026-01-21 21:43:22.161641487 +0000 UTC m=+2094.383819957" watchObservedRunningTime="2026-01-21 21:43:22.163858204 +0000 UTC m=+2094.386036684"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.212167 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cfk7p/must-gather-rrm5z"]
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.214454 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cfk7p/must-gather-rrm5z"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.221805 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cfk7p"/"kube-root-ca.crt"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.222221 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cfk7p"/"openshift-service-ca.crt"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.233514 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cfk7p/must-gather-rrm5z"]
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.336043 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e27f7cf-a2f7-4552-8d62-88945d618163-must-gather-output\") pod \"must-gather-rrm5z\" (UID: \"9e27f7cf-a2f7-4552-8d62-88945d618163\") " pod="openshift-must-gather-cfk7p/must-gather-rrm5z"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.336173 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx6kg\" (UniqueName: \"kubernetes.io/projected/9e27f7cf-a2f7-4552-8d62-88945d618163-kube-api-access-cx6kg\") pod \"must-gather-rrm5z\" (UID: \"9e27f7cf-a2f7-4552-8d62-88945d618163\") " pod="openshift-must-gather-cfk7p/must-gather-rrm5z"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.412354 4860 scope.go:117] "RemoveContainer" containerID="49150c25601e8eeee59a0c099f7b71262d286ef400b00c316a4aa556a05f68da"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.437682 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e27f7cf-a2f7-4552-8d62-88945d618163-must-gather-output\") pod \"must-gather-rrm5z\" (UID: \"9e27f7cf-a2f7-4552-8d62-88945d618163\") " pod="openshift-must-gather-cfk7p/must-gather-rrm5z"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.437796 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx6kg\" (UniqueName: \"kubernetes.io/projected/9e27f7cf-a2f7-4552-8d62-88945d618163-kube-api-access-cx6kg\") pod \"must-gather-rrm5z\" (UID: \"9e27f7cf-a2f7-4552-8d62-88945d618163\") " pod="openshift-must-gather-cfk7p/must-gather-rrm5z"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.438574 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e27f7cf-a2f7-4552-8d62-88945d618163-must-gather-output\") pod \"must-gather-rrm5z\" (UID: \"9e27f7cf-a2f7-4552-8d62-88945d618163\") " pod="openshift-must-gather-cfk7p/must-gather-rrm5z"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.465732 4860 scope.go:117] "RemoveContainer" containerID="8c949e0c7c73efdc0596e3edfa97e23f73af87b4f52efa4d7ace2c8b451b7bd1"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.479673 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx6kg\" (UniqueName: \"kubernetes.io/projected/9e27f7cf-a2f7-4552-8d62-88945d618163-kube-api-access-cx6kg\") pod \"must-gather-rrm5z\" (UID: \"9e27f7cf-a2f7-4552-8d62-88945d618163\") " pod="openshift-must-gather-cfk7p/must-gather-rrm5z"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.528593 4860 scope.go:117] "RemoveContainer" containerID="140a928455b671e1ad23d527064e7121e2bbe20c4b276eb550b740dfe6625f90"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.534869 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cfk7p/must-gather-rrm5z"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.587186 4860 scope.go:117] "RemoveContainer" containerID="0ae997911b4b037d32b7b2e4f42d51c116bfaef8382c0f6446afd3181084e9f4"
Jan 21 21:43:39 crc kubenswrapper[4860]: I0121 21:43:39.650295 4860 scope.go:117] "RemoveContainer" containerID="ef6b20df4e06c4af3291b9f66b14c808adfecf2e7159dddada94cba8f1aa798e"
Jan 21 21:43:40 crc kubenswrapper[4860]: I0121 21:43:40.130160 4860 scope.go:117] "RemoveContainer" containerID="c2052f7dbcf0ab1adecbbc288beb9075af7a81e075f332b7159a2c55cb03a091"
Jan 21 21:43:40 crc kubenswrapper[4860]: I0121 21:43:40.228194 4860 scope.go:117] "RemoveContainer" containerID="2873a1e236edd2c5e97ea43c6121f7a5b206043c9c00c401d62d58dcc42b50db"
Jan 21 21:43:40 crc kubenswrapper[4860]: I0121 21:43:40.602881 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cfk7p/must-gather-rrm5z"]
Jan 21 21:43:41 crc kubenswrapper[4860]: I0121 21:43:41.503683 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cfk7p/must-gather-rrm5z" event={"ID":"9e27f7cf-a2f7-4552-8d62-88945d618163","Type":"ContainerStarted","Data":"dce8e191748795cfa9ba031a8f0b7b58e63f6a742e0102663d4358eb96bf5b82"}
Jan 21 21:43:41 crc kubenswrapper[4860]: I0121 21:43:41.504228 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cfk7p/must-gather-rrm5z" event={"ID":"9e27f7cf-a2f7-4552-8d62-88945d618163","Type":"ContainerStarted","Data":"072237814d418bc6b3c9be70fe24cc6c766b1bb23689ebfcfe54ab159ae01f32"}
Jan 21 21:43:41 crc kubenswrapper[4860]: I0121 21:43:41.504246 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cfk7p/must-gather-rrm5z" event={"ID":"9e27f7cf-a2f7-4552-8d62-88945d618163","Type":"ContainerStarted","Data":"36e2cf20b115001b86560241079cd97277c9f24a961bf798f02d2a052f003a4f"}
Jan 21 21:43:41 crc kubenswrapper[4860]: I0121 21:43:41.526846 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cfk7p/must-gather-rrm5z" podStartSLOduration=2.5267889070000003 podStartE2EDuration="2.526788907s" podCreationTimestamp="2026-01-21 21:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:43:41.521462175 +0000 UTC m=+2113.743640645" watchObservedRunningTime="2026-01-21 21:43:41.526788907 +0000 UTC m=+2113.748967377"
Jan 21 21:43:47 crc kubenswrapper[4860]: I0121 21:43:47.503494 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Jan 21 21:43:49 crc kubenswrapper[4860]: I0121 21:43:49.453855 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q67c7_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f/prometheus-operator/0.log"
Jan 21 21:43:49 crc kubenswrapper[4860]: I0121 21:43:49.480818 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_a1ce9223-1adf-48f8-a0bf-31ce28e5719f/prometheus-operator-admission-webhook/0.log"
Jan 21 21:43:49 crc kubenswrapper[4860]: I0121 21:43:49.537192 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_b2f8b6ee-0b46-4492-ae99-aea050eed563/prometheus-operator-admission-webhook/0.log"
Jan 21 21:43:49 crc kubenswrapper[4860]: I0121 21:43:49.575290 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-t8zjn_db3166f1-3c99-4217-859b-24835c6f1f1e/operator/0.log"
Jan 21 21:43:49 crc kubenswrapper[4860]: I0121 21:43:49.594215 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-qj2fs_6a4226f5-36cd-49b1-bbf3-2d13973b45b5/observability-ui-dashboards/0.log"
Jan 21 21:43:49 crc kubenswrapper[4860]: I0121 21:43:49.614596 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-mv2g7_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a/perses-operator/0.log"
Jan 21 21:43:49 crc kubenswrapper[4860]: I0121 21:43:49.787165 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-wzmgt_20199873-120c-483b-b74e-6d501fdb151a/cert-manager-controller/0.log"
Jan 21 21:43:49 crc kubenswrapper[4860]: I0121 21:43:49.805683 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-m5v7j_fa444955-5bc4-4188-9b3e-80b24e9e6cb4/cert-manager-cainjector/0.log"
Jan 21 21:43:49 crc kubenswrapper[4860]: I0121 21:43:49.843407 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-zvf7j_5889d6e2-f3dc-4189-a782-cf0ad4db5e55/cert-manager-webhook/0.log"
Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.278372 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/extract/0.log"
Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.297286 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/util/0.log"
Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.308435 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/pull/0.log"
Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.321036 4860 log.go:25] "Finished
parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/controller/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.328783 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-sslzp_404e97a3-3fcd-4ec0-a67d-53ed93d62685/manager/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.330300 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/kube-rbac-proxy/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.358687 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/controller/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.384632 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-c95ps_2dd3e1b9-abea-4287-87e0-cb3f60423d54/manager/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.402612 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-vrvmq_1a209a81-fb7b-4621-84db-567f96093a6b/manager/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.414749 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/extract/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.424562 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/util/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.454407 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/pull/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.474978 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-p7jg2_33a0c624-f40b-4d45-9b00-39c36c15d6bb/manager/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.491157 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b29tb_f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85/manager/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.504283 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-pvq7t_084bba8e-36e4-4e04-8109-4b0f6f97d37f/manager/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.738401 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-8hx7p_3d5ae9ad-1309-4221-b99a-86b9e5aa075b/manager/0.log" Jan 21 21:43:51 crc kubenswrapper[4860]: I0121 21:43:51.759859 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-ldzzc_d107aacb-3e12-43fd-a68c-2a6b2c10295c/manager/0.log" Jan 21 21:43:52 crc kubenswrapper[4860]: I0121 21:43:52.779548 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4vpgf_96503e13-4e73-4048-be57-01a726c114da/manager/0.log" Jan 21 21:43:52 crc kubenswrapper[4860]: I0121 21:43:52.802825 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-w6jg6_519cbf74-c4d7-425b-837d-afbb85f3ecc4/manager/0.log" Jan 21 21:43:52 crc kubenswrapper[4860]: I0121 21:43:52.859965 4860 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-w857v_4f7ce297-eef0-4067-bd7b-1bb64ced0239/manager/0.log" Jan 21 21:43:52 crc kubenswrapper[4860]: I0121 21:43:52.901519 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-8mv6c_626c3db6-f60f-472b-b0e5-0834b5bded25/manager/0.log" Jan 21 21:43:52 crc kubenswrapper[4860]: I0121 21:43:52.937428 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-nn25n_69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5/manager/0.log" Jan 21 21:43:52 crc kubenswrapper[4860]: I0121 21:43:52.953346 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q8wm8_adcb4b85-f016-45ed-8029-7191ade5683a/manager/0.log" Jan 21 21:43:52 crc kubenswrapper[4860]: I0121 21:43:52.981701 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854787gn_95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96/manager/0.log" Jan 21 21:43:53 crc kubenswrapper[4860]: I0121 21:43:53.747041 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr/0.log" Jan 21 21:43:53 crc kubenswrapper[4860]: I0121 21:43:53.783961 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c98596b-6jfrl_8dad99b9-0de7-450d-8c58-96590671dd98/manager/0.log" Jan 21 21:43:53 crc kubenswrapper[4860]: I0121 21:43:53.798631 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bhnr9_f4f99b18-596f-4e28-8941-0b83f1cf57e5/registry-server/0.log" Jan 21 21:43:53 crc kubenswrapper[4860]: I0121 21:43:53.816394 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-nbvmh_a5eceab3-1171-484d-91da-990d323440d4/manager/0.log" Jan 21 21:43:53 crc kubenswrapper[4860]: I0121 21:43:53.831716 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-m892h_9731b174-d203-4170-b49f-0de94000f154/manager/0.log" Jan 21 21:43:53 crc kubenswrapper[4860]: I0121 21:43:53.858599 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mpknx_93010989-aa15-487c-b470-919932329af1/operator/0.log" Jan 21 21:43:53 crc kubenswrapper[4860]: I0121 21:43:53.877785 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-pv9x9_b4019683-a628-42e6-91ba-1cb0505326e3/manager/0.log" Jan 21 21:43:53 crc kubenswrapper[4860]: I0121 21:43:53.965318 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/reloader/0.log" Jan 21 21:43:53 crc kubenswrapper[4860]: I0121 21:43:53.977025 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr-metrics/0.log" Jan 21 21:43:53 crc kubenswrapper[4860]: I0121 21:43:53.992376 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy/0.log" Jan 21 21:43:54 crc kubenswrapper[4860]: I0121 21:43:54.010145 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy-frr/0.log" Jan 21 21:43:54 crc kubenswrapper[4860]: I0121 21:43:54.022923 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-frr-files/0.log" Jan 21 21:43:54 crc 
kubenswrapper[4860]: I0121 21:43:54.028532 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-reloader/0.log" Jan 21 21:43:54 crc kubenswrapper[4860]: I0121 21:43:54.035776 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-metrics/0.log" Jan 21 21:43:54 crc kubenswrapper[4860]: I0121 21:43:54.052321 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6vpls_e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e/frr-k8s-webhook-server/0.log" Jan 21 21:43:54 crc kubenswrapper[4860]: I0121 21:43:54.110580 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5844d47cc5-cxs88_c8584c36-7092-4bd3-b92e-5a3e8c16ec63/manager/0.log" Jan 21 21:43:54 crc kubenswrapper[4860]: I0121 21:43:54.130003 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ccfb7bd9d-w49p7_f6d67ae0-be03-465f-bb51-ace581cc0bb8/webhook-server/0.log" Jan 21 21:43:54 crc kubenswrapper[4860]: I0121 21:43:54.191617 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-bk9sb_61a273d5-b25c-4729-8736-9965ac435468/manager/0.log" Jan 21 21:43:54 crc kubenswrapper[4860]: I0121 21:43:54.213507 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-tldvn_3f367ab5-2df3-466b-8ec4-7c4f23dcc578/manager/0.log" Jan 21 21:43:54 crc kubenswrapper[4860]: I0121 21:43:54.456249 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/speaker/0.log" Jan 21 21:43:54 crc kubenswrapper[4860]: I0121 21:43:54.469069 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/kube-rbac-proxy/0.log" Jan 21 21:43:55 crc kubenswrapper[4860]: I0121 21:43:55.193239 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-844f9d4c74-gwp5p_84bd609c-f081-46a8-80ba-9c251389699e/manager/0.log" Jan 21 21:43:55 crc kubenswrapper[4860]: I0121 21:43:55.204588 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-8w757_bdbebf1c-8bd6-4223-939a-f088d773cdc5/registry-server/0.log" Jan 21 21:43:56 crc kubenswrapper[4860]: I0121 21:43:56.326098 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-wzmgt_20199873-120c-483b-b74e-6d501fdb151a/cert-manager-controller/0.log" Jan 21 21:43:56 crc kubenswrapper[4860]: I0121 21:43:56.354422 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-m5v7j_fa444955-5bc4-4188-9b3e-80b24e9e6cb4/cert-manager-cainjector/0.log" Jan 21 21:43:56 crc kubenswrapper[4860]: I0121 21:43:56.474866 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-zvf7j_5889d6e2-f3dc-4189-a782-cf0ad4db5e55/cert-manager-webhook/0.log" Jan 21 21:43:57 crc kubenswrapper[4860]: I0121 21:43:57.324963 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-82rm8_b6c5b0be-96f9-4141-a721-54ca98a89d93/nmstate-console-plugin/0.log" Jan 21 21:43:57 crc kubenswrapper[4860]: I0121 21:43:57.349435 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-66jdw_4ccac8fa-d2c8-4110-9bd4-78a6340612f9/nmstate-handler/0.log" Jan 21 21:43:57 crc kubenswrapper[4860]: I0121 21:43:57.351217 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4x452_70aea1b0-13b2-43ee-a77d-10c3143e4a95/control-plane-machine-set-operator/0.log" Jan 21 21:43:57 crc kubenswrapper[4860]: I0121 21:43:57.373995 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jx5dt_40070d0f-4d18-4d7c-a85a-cd2f904ea27a/kube-rbac-proxy/0.log" Jan 21 21:43:57 crc kubenswrapper[4860]: I0121 21:43:57.374219 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ktn72_8364952a-bcf3-49ae-b357-0521e9d6e04e/nmstate-metrics/0.log" Jan 21 21:43:57 crc kubenswrapper[4860]: I0121 21:43:57.387337 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ktn72_8364952a-bcf3-49ae-b357-0521e9d6e04e/kube-rbac-proxy/0.log" Jan 21 21:43:57 crc kubenswrapper[4860]: I0121 21:43:57.387475 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jx5dt_40070d0f-4d18-4d7c-a85a-cd2f904ea27a/machine-api-operator/0.log" Jan 21 21:43:57 crc kubenswrapper[4860]: I0121 21:43:57.406542 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tpllw_5f9bf17c-9142-474a-8a94-7e8cc90702f0/nmstate-operator/0.log" Jan 21 21:43:57 crc kubenswrapper[4860]: I0121 21:43:57.422767 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-wnc66_cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d/nmstate-webhook/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.429877 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/extract/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.437475 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/util/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.449873 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/pull/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.465904 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-sslzp_404e97a3-3fcd-4ec0-a67d-53ed93d62685/manager/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.512400 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-c95ps_2dd3e1b9-abea-4287-87e0-cb3f60423d54/manager/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.531903 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-vrvmq_1a209a81-fb7b-4621-84db-567f96093a6b/manager/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.553992 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/extract/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.567883 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/util/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.588220 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/pull/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 
21:43:58.607049 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-p7jg2_33a0c624-f40b-4d45-9b00-39c36c15d6bb/manager/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.631726 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b29tb_f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85/manager/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.647392 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-pvq7t_084bba8e-36e4-4e04-8109-4b0f6f97d37f/manager/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.876982 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-8hx7p_3d5ae9ad-1309-4221-b99a-86b9e5aa075b/manager/0.log" Jan 21 21:43:58 crc kubenswrapper[4860]: I0121 21:43:58.891475 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-ldzzc_d107aacb-3e12-43fd-a68c-2a6b2c10295c/manager/0.log" Jan 21 21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.044990 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4vpgf_96503e13-4e73-4048-be57-01a726c114da/manager/0.log" Jan 21 21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.059997 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-w6jg6_519cbf74-c4d7-425b-837d-afbb85f3ecc4/manager/0.log" Jan 21 21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.107671 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-w857v_4f7ce297-eef0-4067-bd7b-1bb64ced0239/manager/0.log" Jan 21 21:43:59 crc 
kubenswrapper[4860]: I0121 21:43:59.127037 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-8mv6c_626c3db6-f60f-472b-b0e5-0834b5bded25/manager/0.log" Jan 21 21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.148902 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-nn25n_69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5/manager/0.log" Jan 21 21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.168081 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q8wm8_adcb4b85-f016-45ed-8029-7191ade5683a/manager/0.log" Jan 21 21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.204047 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854787gn_95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96/manager/0.log" Jan 21 21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.831484 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c98596b-6jfrl_8dad99b9-0de7-450d-8c58-96590671dd98/manager/0.log" Jan 21 21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.842248 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bhnr9_f4f99b18-596f-4e28-8941-0b83f1cf57e5/registry-server/0.log" Jan 21 21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.858693 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-nbvmh_a5eceab3-1171-484d-91da-990d323440d4/manager/0.log" Jan 21 21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.882395 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-m892h_9731b174-d203-4170-b49f-0de94000f154/manager/0.log" Jan 21 
21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.902132 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mpknx_93010989-aa15-487c-b470-919932329af1/operator/0.log" Jan 21 21:43:59 crc kubenswrapper[4860]: I0121 21:43:59.921104 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-pv9x9_b4019683-a628-42e6-91ba-1cb0505326e3/manager/0.log" Jan 21 21:44:00 crc kubenswrapper[4860]: I0121 21:44:00.216750 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-bk9sb_61a273d5-b25c-4729-8736-9965ac435468/manager/0.log" Jan 21 21:44:00 crc kubenswrapper[4860]: I0121 21:44:00.248556 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-tldvn_3f367ab5-2df3-466b-8ec4-7c4f23dcc578/manager/0.log" Jan 21 21:44:00 crc kubenswrapper[4860]: I0121 21:44:00.945679 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-844f9d4c74-gwp5p_84bd609c-f081-46a8-80ba-9c251389699e/manager/0.log" Jan 21 21:44:00 crc kubenswrapper[4860]: I0121 21:44:00.959444 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-8w757_bdbebf1c-8bd6-4223-939a-f088d773cdc5/registry-server/0.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.235648 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/kube-multus-additional-cni-plugins/0.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.245735 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/egress-router-binary-copy/0.log" Jan 
21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.256235 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/cni-plugins/0.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.269360 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/bond-cni-plugin/0.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.288258 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/routeoverride-cni/0.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.298843 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/whereabouts-cni-bincopy/0.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.308809 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/whereabouts-cni/0.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.333872 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-lcbjc_2e29e04b-89f7-4d77-8e17-0355493a1d9f/multus-admission-controller/0.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.341085 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-lcbjc_2e29e04b-89f7-4d77-8e17-0355493a1d9f/kube-rbac-proxy/0.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.399855 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/2.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.421645 4860 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/3.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.463203 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-rrwcr_60ae05da-3403-4a2f-92f4-2ffa574a65a8/network-metrics-daemon/0.log" Jan 21 21:44:03 crc kubenswrapper[4860]: I0121 21:44:03.476217 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-rrwcr_60ae05da-3403-4a2f-92f4-2ffa574a65a8/kube-rbac-proxy/0.log" Jan 21 21:44:10 crc kubenswrapper[4860]: I0121 21:44:10.541437 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/controller/0.log" Jan 21 21:44:10 crc kubenswrapper[4860]: I0121 21:44:10.559127 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/kube-rbac-proxy/0.log" Jan 21 21:44:10 crc kubenswrapper[4860]: I0121 21:44:10.588220 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/controller/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.405705 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.422352 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/reloader/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.429600 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr-metrics/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.441429 4860 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.468637 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy-frr/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.478401 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-frr-files/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.493213 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-reloader/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.506115 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-metrics/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.522080 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6vpls_e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e/frr-k8s-webhook-server/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.559426 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5844d47cc5-cxs88_c8584c36-7092-4bd3-b92e-5a3e8c16ec63/manager/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.594637 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ccfb7bd9d-w49p7_f6d67ae0-be03-465f-bb51-ace581cc0bb8/webhook-server/0.log" Jan 21 21:44:12 crc kubenswrapper[4860]: I0121 21:44:12.928451 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/speaker/0.log" Jan 21 21:44:13 crc kubenswrapper[4860]: I0121 21:44:13.077164 
4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/kube-rbac-proxy/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.550738 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/extract/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.564742 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/util/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.577004 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/pull/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.601960 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-sslzp_404e97a3-3fcd-4ec0-a67d-53ed93d62685/manager/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.654966 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-c95ps_2dd3e1b9-abea-4287-87e0-cb3f60423d54/manager/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.671164 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-vrvmq_1a209a81-fb7b-4621-84db-567f96093a6b/manager/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.681608 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/extract/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: 
I0121 21:44:16.692887 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/util/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.705949 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/pull/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.722064 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-p7jg2_33a0c624-f40b-4d45-9b00-39c36c15d6bb/manager/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.736137 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b29tb_f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85/manager/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.749648 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-pvq7t_084bba8e-36e4-4e04-8109-4b0f6f97d37f/manager/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.963492 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-8hx7p_3d5ae9ad-1309-4221-b99a-86b9e5aa075b/manager/0.log" Jan 21 21:44:16 crc kubenswrapper[4860]: I0121 21:44:16.979399 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-ldzzc_d107aacb-3e12-43fd-a68c-2a6b2c10295c/manager/0.log" Jan 21 21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.108691 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4vpgf_96503e13-4e73-4048-be57-01a726c114da/manager/0.log" Jan 21 21:44:17 
crc kubenswrapper[4860]: I0121 21:44:17.121548 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-w6jg6_519cbf74-c4d7-425b-837d-afbb85f3ecc4/manager/0.log" Jan 21 21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.158546 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-w857v_4f7ce297-eef0-4067-bd7b-1bb64ced0239/manager/0.log" Jan 21 21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.167902 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-8mv6c_626c3db6-f60f-472b-b0e5-0834b5bded25/manager/0.log" Jan 21 21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.187076 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-nn25n_69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5/manager/0.log" Jan 21 21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.196927 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q8wm8_adcb4b85-f016-45ed-8029-7191ade5683a/manager/0.log" Jan 21 21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.214137 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854787gn_95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96/manager/0.log" Jan 21 21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.763175 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c98596b-6jfrl_8dad99b9-0de7-450d-8c58-96590671dd98/manager/0.log" Jan 21 21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.843819 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bhnr9_f4f99b18-596f-4e28-8941-0b83f1cf57e5/registry-server/0.log" Jan 21 
21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.865844 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-nbvmh_a5eceab3-1171-484d-91da-990d323440d4/manager/0.log" Jan 21 21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.879557 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-m892h_9731b174-d203-4170-b49f-0de94000f154/manager/0.log" Jan 21 21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.909135 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mpknx_93010989-aa15-487c-b470-919932329af1/operator/0.log" Jan 21 21:44:17 crc kubenswrapper[4860]: I0121 21:44:17.920981 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-pv9x9_b4019683-a628-42e6-91ba-1cb0505326e3/manager/0.log" Jan 21 21:44:18 crc kubenswrapper[4860]: I0121 21:44:18.216593 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-bk9sb_61a273d5-b25c-4729-8736-9965ac435468/manager/0.log" Jan 21 21:44:18 crc kubenswrapper[4860]: I0121 21:44:18.227984 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-tldvn_3f367ab5-2df3-466b-8ec4-7c4f23dcc578/manager/0.log" Jan 21 21:44:18 crc kubenswrapper[4860]: I0121 21:44:18.891675 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-844f9d4c74-gwp5p_84bd609c-f081-46a8-80ba-9c251389699e/manager/0.log" Jan 21 21:44:18 crc kubenswrapper[4860]: I0121 21:44:18.908110 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-8w757_bdbebf1c-8bd6-4223-939a-f088d773cdc5/registry-server/0.log" Jan 21 
21:44:25 crc kubenswrapper[4860]: I0121 21:44:25.987122 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4x452_70aea1b0-13b2-43ee-a77d-10c3143e4a95/control-plane-machine-set-operator/0.log" Jan 21 21:44:26 crc kubenswrapper[4860]: I0121 21:44:26.005075 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jx5dt_40070d0f-4d18-4d7c-a85a-cd2f904ea27a/kube-rbac-proxy/0.log" Jan 21 21:44:26 crc kubenswrapper[4860]: I0121 21:44:26.022337 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jx5dt_40070d0f-4d18-4d7c-a85a-cd2f904ea27a/machine-api-operator/0.log" Jan 21 21:44:34 crc kubenswrapper[4860]: I0121 21:44:34.014165 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-wzmgt_20199873-120c-483b-b74e-6d501fdb151a/cert-manager-controller/0.log" Jan 21 21:44:34 crc kubenswrapper[4860]: I0121 21:44:34.033340 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-m5v7j_fa444955-5bc4-4188-9b3e-80b24e9e6cb4/cert-manager-cainjector/0.log" Jan 21 21:44:34 crc kubenswrapper[4860]: I0121 21:44:34.053855 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-zvf7j_5889d6e2-f3dc-4189-a782-cf0ad4db5e55/cert-manager-webhook/0.log" Jan 21 21:44:40 crc kubenswrapper[4860]: I0121 21:44:40.915797 4860 scope.go:117] "RemoveContainer" containerID="07efbfc894fb85132cbd1b08de9be0ff3681facacf231d8f3ac8c3b20673d43e" Jan 21 21:44:40 crc kubenswrapper[4860]: I0121 21:44:40.946546 4860 scope.go:117] "RemoveContainer" containerID="43d3af72f152f610f81572888c295590f15938acae3ba317e91a4edaf351e6a9" Jan 21 21:44:40 crc kubenswrapper[4860]: I0121 21:44:40.970627 4860 scope.go:117] "RemoveContainer" 
containerID="829ce9e97c11a141da2881c1ea310217ba8a78327d05367061cad0944597a7e5" Jan 21 21:44:41 crc kubenswrapper[4860]: I0121 21:44:41.034250 4860 scope.go:117] "RemoveContainer" containerID="43d62ccc3fb59822eae900a066691991ca32c84d9f5eff660bc9ea9bcc3f3fd0" Jan 21 21:44:41 crc kubenswrapper[4860]: I0121 21:44:41.576888 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-82rm8_b6c5b0be-96f9-4141-a721-54ca98a89d93/nmstate-console-plugin/0.log" Jan 21 21:44:41 crc kubenswrapper[4860]: I0121 21:44:41.606157 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-66jdw_4ccac8fa-d2c8-4110-9bd4-78a6340612f9/nmstate-handler/0.log" Jan 21 21:44:41 crc kubenswrapper[4860]: I0121 21:44:41.619813 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ktn72_8364952a-bcf3-49ae-b357-0521e9d6e04e/nmstate-metrics/0.log" Jan 21 21:44:41 crc kubenswrapper[4860]: I0121 21:44:41.639108 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ktn72_8364952a-bcf3-49ae-b357-0521e9d6e04e/kube-rbac-proxy/0.log" Jan 21 21:44:41 crc kubenswrapper[4860]: I0121 21:44:41.656777 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tpllw_5f9bf17c-9142-474a-8a94-7e8cc90702f0/nmstate-operator/0.log" Jan 21 21:44:41 crc kubenswrapper[4860]: I0121 21:44:41.675695 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-wnc66_cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d/nmstate-webhook/0.log" Jan 21 21:44:49 crc kubenswrapper[4860]: I0121 21:44:49.598142 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q67c7_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f/prometheus-operator/0.log" Jan 21 21:44:49 crc kubenswrapper[4860]: I0121 
21:44:49.616789 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_a1ce9223-1adf-48f8-a0bf-31ce28e5719f/prometheus-operator-admission-webhook/0.log" Jan 21 21:44:49 crc kubenswrapper[4860]: I0121 21:44:49.635554 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_b2f8b6ee-0b46-4492-ae99-aea050eed563/prometheus-operator-admission-webhook/0.log" Jan 21 21:44:49 crc kubenswrapper[4860]: I0121 21:44:49.689237 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-t8zjn_db3166f1-3c99-4217-859b-24835c6f1f1e/operator/0.log" Jan 21 21:44:49 crc kubenswrapper[4860]: I0121 21:44:49.698572 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-qj2fs_6a4226f5-36cd-49b1-bbf3-2d13973b45b5/observability-ui-dashboards/0.log" Jan 21 21:44:49 crc kubenswrapper[4860]: I0121 21:44:49.718373 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-mv2g7_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a/perses-operator/0.log" Jan 21 21:44:57 crc kubenswrapper[4860]: I0121 21:44:57.703218 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/controller/0.log" Jan 21 21:44:57 crc kubenswrapper[4860]: I0121 21:44:57.716238 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/kube-rbac-proxy/0.log" Jan 21 21:44:57 crc kubenswrapper[4860]: I0121 21:44:57.747874 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/controller/0.log" Jan 21 21:44:58 crc kubenswrapper[4860]: I0121 
21:44:58.851417 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr/0.log" Jan 21 21:44:58 crc kubenswrapper[4860]: I0121 21:44:58.863573 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/reloader/0.log" Jan 21 21:44:58 crc kubenswrapper[4860]: I0121 21:44:58.869164 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr-metrics/0.log" Jan 21 21:44:58 crc kubenswrapper[4860]: I0121 21:44:58.878448 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy/0.log" Jan 21 21:44:58 crc kubenswrapper[4860]: I0121 21:44:58.887807 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy-frr/0.log" Jan 21 21:44:58 crc kubenswrapper[4860]: I0121 21:44:58.899745 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-frr-files/0.log" Jan 21 21:44:58 crc kubenswrapper[4860]: I0121 21:44:58.914925 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-reloader/0.log" Jan 21 21:44:58 crc kubenswrapper[4860]: I0121 21:44:58.928921 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-metrics/0.log" Jan 21 21:44:58 crc kubenswrapper[4860]: I0121 21:44:58.946455 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6vpls_e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e/frr-k8s-webhook-server/0.log" Jan 21 21:44:58 crc kubenswrapper[4860]: I0121 21:44:58.976406 4860 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5844d47cc5-cxs88_c8584c36-7092-4bd3-b92e-5a3e8c16ec63/manager/0.log" Jan 21 21:44:58 crc kubenswrapper[4860]: I0121 21:44:58.997547 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ccfb7bd9d-w49p7_f6d67ae0-be03-465f-bb51-ace581cc0bb8/webhook-server/0.log" Jan 21 21:44:59 crc kubenswrapper[4860]: I0121 21:44:59.253025 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/speaker/0.log" Jan 21 21:44:59 crc kubenswrapper[4860]: I0121 21:44:59.263657 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/kube-rbac-proxy/0.log" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.167199 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh"] Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.169004 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.172749 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.172898 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.201271 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh"] Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.230473 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-config-volume\") pod \"collect-profiles-29483865-bjkwh\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.230577 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95kd2\" (UniqueName: \"kubernetes.io/projected/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-kube-api-access-95kd2\") pod \"collect-profiles-29483865-bjkwh\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.230662 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-secret-volume\") pod \"collect-profiles-29483865-bjkwh\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.332850 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95kd2\" (UniqueName: \"kubernetes.io/projected/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-kube-api-access-95kd2\") pod \"collect-profiles-29483865-bjkwh\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.332962 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-secret-volume\") pod \"collect-profiles-29483865-bjkwh\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.333104 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-config-volume\") pod \"collect-profiles-29483865-bjkwh\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.334598 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-config-volume\") pod \"collect-profiles-29483865-bjkwh\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.348354 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-secret-volume\") pod \"collect-profiles-29483865-bjkwh\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.358920 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95kd2\" (UniqueName: \"kubernetes.io/projected/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-kube-api-access-95kd2\") pod \"collect-profiles-29483865-bjkwh\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:00 crc kubenswrapper[4860]: I0121 21:45:00.497470 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:01 crc kubenswrapper[4860]: I0121 21:45:01.183530 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh"] Jan 21 21:45:01 crc kubenswrapper[4860]: I0121 21:45:01.559974 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" event={"ID":"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa","Type":"ContainerStarted","Data":"e8e8956487fa5dadc5b5acebbe12750b3e33eb834897286a2ff7ea8153242f6e"} Jan 21 21:45:01 crc kubenswrapper[4860]: I0121 21:45:01.561021 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" event={"ID":"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa","Type":"ContainerStarted","Data":"fb49b51a2345def2a3fcc213b6561b1efb4825b21f5e5ad26511525381026926"} Jan 21 21:45:01 crc kubenswrapper[4860]: I0121 21:45:01.587276 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" 
podStartSLOduration=1.587234863 podStartE2EDuration="1.587234863s" podCreationTimestamp="2026-01-21 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 21:45:01.582110487 +0000 UTC m=+2193.804289227" watchObservedRunningTime="2026-01-21 21:45:01.587234863 +0000 UTC m=+2193.809413323" Jan 21 21:45:02 crc kubenswrapper[4860]: I0121 21:45:02.103832 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:45:02 crc kubenswrapper[4860]: I0121 21:45:02.104258 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:45:02 crc kubenswrapper[4860]: I0121 21:45:02.573325 4860 generic.go:334] "Generic (PLEG): container finished" podID="9f83bc87-eb5a-4b0e-bdb0-103d65e488aa" containerID="e8e8956487fa5dadc5b5acebbe12750b3e33eb834897286a2ff7ea8153242f6e" exitCode=0 Jan 21 21:45:02 crc kubenswrapper[4860]: I0121 21:45:02.573387 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" event={"ID":"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa","Type":"ContainerDied","Data":"e8e8956487fa5dadc5b5acebbe12750b3e33eb834897286a2ff7ea8153242f6e"} Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.165450 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.193621 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-secret-volume\") pod \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.194432 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95kd2\" (UniqueName: \"kubernetes.io/projected/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-kube-api-access-95kd2\") pod \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.194492 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-config-volume\") pod \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\" (UID: \"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa\") " Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.195131 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "9f83bc87-eb5a-4b0e-bdb0-103d65e488aa" (UID: "9f83bc87-eb5a-4b0e-bdb0-103d65e488aa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.204190 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9f83bc87-eb5a-4b0e-bdb0-103d65e488aa" (UID: "9f83bc87-eb5a-4b0e-bdb0-103d65e488aa"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.213220 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-kube-api-access-95kd2" (OuterVolumeSpecName: "kube-api-access-95kd2") pod "9f83bc87-eb5a-4b0e-bdb0-103d65e488aa" (UID: "9f83bc87-eb5a-4b0e-bdb0-103d65e488aa"). InnerVolumeSpecName "kube-api-access-95kd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.296909 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95kd2\" (UniqueName: \"kubernetes.io/projected/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-kube-api-access-95kd2\") on node \"crc\" DevicePath \"\"" Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.297000 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.297015 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f83bc87-eb5a-4b0e-bdb0-103d65e488aa-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.333010 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp"] Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.343192 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483820-nknlp"] Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.590919 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c3c027-6018-4182-bf8c-6309230608eb" path="/var/lib/kubelet/pods/70c3c027-6018-4182-bf8c-6309230608eb/volumes" Jan 21 21:45:04 crc 
kubenswrapper[4860]: I0121 21:45:04.594403 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" event={"ID":"9f83bc87-eb5a-4b0e-bdb0-103d65e488aa","Type":"ContainerDied","Data":"fb49b51a2345def2a3fcc213b6561b1efb4825b21f5e5ad26511525381026926"} Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.594439 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb49b51a2345def2a3fcc213b6561b1efb4825b21f5e5ad26511525381026926" Jan 21 21:45:04 crc kubenswrapper[4860]: I0121 21:45:04.594575 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483865-bjkwh" Jan 21 21:45:06 crc kubenswrapper[4860]: I0121 21:45:06.649209 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_d48b5afa-e436-4bbb-8131-2bea3323fe51/alertmanager/0.log" Jan 21 21:45:06 crc kubenswrapper[4860]: I0121 21:45:06.684175 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_d48b5afa-e436-4bbb-8131-2bea3323fe51/config-reloader/0.log" Jan 21 21:45:06 crc kubenswrapper[4860]: I0121 21:45:06.701396 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_d48b5afa-e436-4bbb-8131-2bea3323fe51/init-config-reloader/0.log" Jan 21 21:45:06 crc kubenswrapper[4860]: I0121 21:45:06.757902 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_05c42c02-4391-4c36-932c-dc0f3cdb80d7/ceilometer-central-agent/0.log" Jan 21 21:45:06 crc kubenswrapper[4860]: I0121 21:45:06.779367 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_05c42c02-4391-4c36-932c-dc0f3cdb80d7/ceilometer-notification-agent/0.log" Jan 21 21:45:06 crc kubenswrapper[4860]: I0121 
21:45:06.789065 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_05c42c02-4391-4c36-932c-dc0f3cdb80d7/sg-core/0.log"
Jan 21 21:45:06 crc kubenswrapper[4860]: I0121 21:45:06.802917 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_05c42c02-4391-4c36-932c-dc0f3cdb80d7/proxy-httpd/0.log"
Jan 21 21:45:06 crc kubenswrapper[4860]: I0121 21:45:06.909108 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-655bfffb94-t7n44_75c67306-751f-46ae-8511-b77f1babd94c/keystone-api/0.log"
Jan 21 21:45:06 crc kubenswrapper[4860]: I0121 21:45:06.922279 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-bootstrap-bpl6z_9a636e47-103a-4fb0-9cdd-567e47cae4c1/keystone-bootstrap/0.log"
Jan 21 21:45:06 crc kubenswrapper[4860]: I0121 21:45:06.934400 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_kube-state-metrics-0_28efe2fc-3f49-48b8-91f3-29b7a2d6879e/kube-state-metrics/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.329479 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_memcached-0_17062c8a-d76d-4565-9ec1-a0a2d83ad784/memcached/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.367860 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_d38c2bac-c957-454f-81e3-db76b749ff2d/galera/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.383409 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_d38c2bac-c957-454f-81e3-db76b749ff2d/mysql-bootstrap/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.394668 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstackclient_1696b722-1339-4636-99ca-32f9276ca7db/openstackclient/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.433316 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_2b229e16-dd0c-4c98-b734-dbe3c20639aa/prometheus/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.443694 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_2b229e16-dd0c-4c98-b734-dbe3c20639aa/config-reloader/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.461584 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_2b229e16-dd0c-4c98-b734-dbe3c20639aa/thanos-sidecar/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.473302 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_2b229e16-dd0c-4c98-b734-dbe3c20639aa/init-config-reloader/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.515255 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_f04c4d4c-f490-4a77-94fa-bea0fc5a43f3/rabbitmq/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.533831 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_f04c4d4c-f490-4a77-94fa-bea0fc5a43f3/setup-container/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.598237 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_d6da3cbd-8875-47bf-95ab-3734f22fe8a0/rabbitmq/0.log"
Jan 21 21:45:19 crc kubenswrapper[4860]: I0121 21:45:19.604596 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_d6da3cbd-8875-47bf-95ab-3734f22fe8a0/setup-container/0.log"
Jan 21 21:45:28 crc kubenswrapper[4860]: I0121 21:45:28.054098 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-bpl6z"]
Jan 21 21:45:28 crc kubenswrapper[4860]: I0121 21:45:28.064158 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-bpl6z"]
Jan 21 21:45:28 crc kubenswrapper[4860]: I0121 21:45:28.596629 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a636e47-103a-4fb0-9cdd-567e47cae4c1" path="/var/lib/kubelet/pods/9a636e47-103a-4fb0-9cdd-567e47cae4c1/volumes"
Jan 21 21:45:29 crc kubenswrapper[4860]: I0121 21:45:29.859151 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj_64076d63-918c-4b94-9dae-a1ce4cd5b254/extract/0.log"
Jan 21 21:45:29 crc kubenswrapper[4860]: I0121 21:45:29.883360 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj_64076d63-918c-4b94-9dae-a1ce4cd5b254/util/0.log"
Jan 21 21:45:29 crc kubenswrapper[4860]: I0121 21:45:29.965060 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apktjj_64076d63-918c-4b94-9dae-a1ce4cd5b254/pull/0.log"
Jan 21 21:45:29 crc kubenswrapper[4860]: I0121 21:45:29.982701 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd_5c161771-f442-4590-980e-3346fa015d48/extract/0.log"
Jan 21 21:45:29 crc kubenswrapper[4860]: I0121 21:45:29.996121 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd_5c161771-f442-4590-980e-3346fa015d48/util/0.log"
Jan 21 21:45:30 crc kubenswrapper[4860]: I0121 21:45:30.007112 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclcktd_5c161771-f442-4590-980e-3346fa015d48/pull/0.log"
Jan 21 21:45:30 crc kubenswrapper[4860]: I0121 21:45:30.022343 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs_1a5257fe-4ae3-44ec-b045-524b3b95c81c/extract/0.log"
Jan 21 21:45:30 crc kubenswrapper[4860]: I0121 21:45:30.031549 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs_1a5257fe-4ae3-44ec-b045-524b3b95c81c/util/0.log"
Jan 21 21:45:30 crc kubenswrapper[4860]: I0121 21:45:30.041913 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ptdxs_1a5257fe-4ae3-44ec-b045-524b3b95c81c/pull/0.log"
Jan 21 21:45:30 crc kubenswrapper[4860]: I0121 21:45:30.058551 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j_910ee5e4-1afe-4f34-a512-fc390f5ce35a/extract/0.log"
Jan 21 21:45:30 crc kubenswrapper[4860]: I0121 21:45:30.074126 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j_910ee5e4-1afe-4f34-a512-fc390f5ce35a/util/0.log"
Jan 21 21:45:30 crc kubenswrapper[4860]: I0121 21:45:30.093513 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08v5t9j_910ee5e4-1afe-4f34-a512-fc390f5ce35a/pull/0.log"
Jan 21 21:45:30 crc kubenswrapper[4860]: I0121 21:45:30.537441 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bspqh_b3ceb80d-2539-41b8-a472-6dc1a6bdee30/registry-server/0.log"
Jan 21 21:45:30 crc kubenswrapper[4860]: I0121 21:45:30.554540 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bspqh_b3ceb80d-2539-41b8-a472-6dc1a6bdee30/extract-utilities/0.log"
Jan 21 21:45:30 crc kubenswrapper[4860]: I0121 21:45:30.565434 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bspqh_b3ceb80d-2539-41b8-a472-6dc1a6bdee30/extract-content/0.log"
Jan 21 21:45:31 crc kubenswrapper[4860]: I0121 21:45:31.141949 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g4nd6_7caba0ee-5c63-4f29-a763-d68278316c8c/registry-server/0.log"
Jan 21 21:45:31 crc kubenswrapper[4860]: I0121 21:45:31.156346 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g4nd6_7caba0ee-5c63-4f29-a763-d68278316c8c/extract-utilities/0.log"
Jan 21 21:45:31 crc kubenswrapper[4860]: I0121 21:45:31.168282 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g4nd6_7caba0ee-5c63-4f29-a763-d68278316c8c/extract-content/0.log"
Jan 21 21:45:31 crc kubenswrapper[4860]: I0121 21:45:31.192274 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2jl5x_dcae5e6e-baa7-4ab5-8c8c-7d9d235e2c87/marketplace-operator/0.log"
Jan 21 21:45:31 crc kubenswrapper[4860]: I0121 21:45:31.315852 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wtc7j_e639198b-f128-4643-823a-f52afd19d43b/registry-server/0.log"
Jan 21 21:45:31 crc kubenswrapper[4860]: I0121 21:45:31.322462 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wtc7j_e639198b-f128-4643-823a-f52afd19d43b/extract-utilities/0.log"
Jan 21 21:45:31 crc kubenswrapper[4860]: I0121 21:45:31.341492 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wtc7j_e639198b-f128-4643-823a-f52afd19d43b/extract-content/0.log"
Jan 21 21:45:31 crc kubenswrapper[4860]: I0121 21:45:31.997569 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mwvkt_859f5834-9350-48bf-9329-e20069b0613e/registry-server/0.log"
Jan 21 21:45:32 crc kubenswrapper[4860]: I0121 21:45:32.004655 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mwvkt_859f5834-9350-48bf-9329-e20069b0613e/extract-utilities/0.log"
Jan 21 21:45:32 crc kubenswrapper[4860]: I0121 21:45:32.015648 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mwvkt_859f5834-9350-48bf-9329-e20069b0613e/extract-content/0.log"
Jan 21 21:45:32 crc kubenswrapper[4860]: I0121 21:45:32.103163 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:45:32 crc kubenswrapper[4860]: I0121 21:45:32.103269 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:45:38 crc kubenswrapper[4860]: I0121 21:45:38.265875 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q67c7_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f/prometheus-operator/0.log"
Jan 21 21:45:38 crc kubenswrapper[4860]: I0121 21:45:38.280970 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_a1ce9223-1adf-48f8-a0bf-31ce28e5719f/prometheus-operator-admission-webhook/0.log"
Jan 21 21:45:38 crc kubenswrapper[4860]: I0121 21:45:38.303120 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_b2f8b6ee-0b46-4492-ae99-aea050eed563/prometheus-operator-admission-webhook/0.log"
Jan 21 21:45:38 crc kubenswrapper[4860]: I0121 21:45:38.341583 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-t8zjn_db3166f1-3c99-4217-859b-24835c6f1f1e/operator/0.log"
Jan 21 21:45:38 crc kubenswrapper[4860]: I0121 21:45:38.351065 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-qj2fs_6a4226f5-36cd-49b1-bbf3-2d13973b45b5/observability-ui-dashboards/0.log"
Jan 21 21:45:38 crc kubenswrapper[4860]: I0121 21:45:38.382650 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-mv2g7_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a/perses-operator/0.log"
Jan 21 21:45:41 crc kubenswrapper[4860]: I0121 21:45:41.193586 4860 scope.go:117] "RemoveContainer" containerID="df3df166931722411e4178d2d62ddca1b5a9e8f959c20f37923a984885d1918b"
Jan 21 21:45:41 crc kubenswrapper[4860]: I0121 21:45:41.224194 4860 scope.go:117] "RemoveContainer" containerID="d8b4aef7e44b61bd6f15df66726f9bdaf3e361e9a43c1a27ef3598c38ac6eae5"
Jan 21 21:45:41 crc kubenswrapper[4860]: I0121 21:45:41.250837 4860 scope.go:117] "RemoveContainer" containerID="07ae7ceeda909c5127abdb8f6d33484fb13c99e6904bdcb255286bbb928af1d1"
Jan 21 21:45:41 crc kubenswrapper[4860]: I0121 21:45:41.272655 4860 scope.go:117] "RemoveContainer" containerID="98e085bab2ed495572fc71b1486e2489201c645134c462a158afc44af523e337"
Jan 21 21:45:41 crc kubenswrapper[4860]: I0121 21:45:41.306376 4860 scope.go:117] "RemoveContainer" containerID="fae2d8fe59ebe4f61d7317868185c325a868f0f3981a870d55bd4f25b1b35519"
Jan 21 21:45:41 crc kubenswrapper[4860]: I0121 21:45:41.366132 4860 scope.go:117] "RemoveContainer" containerID="542f04e6c1e45c105233c40a3161f538aaaf447a3c4f3334557393fa81a669d3"
Jan 21 21:45:41 crc kubenswrapper[4860]: I0121 21:45:41.420795 4860 scope.go:117] "RemoveContainer" containerID="6398e7b23ddf4a00f8e28cd5e87ae34a0ceaa4c983ef9af321a2ac729545cd9d"
Jan 21 21:45:41 crc kubenswrapper[4860]: I0121 21:45:41.531687 4860 scope.go:117] "RemoveContainer" containerID="4e7e11e153ee33a9dcb2bb16b32fd498cd31ca954d27ec122049110e182bc57d"
Jan 21 21:45:41 crc kubenswrapper[4860]: I0121 21:45:41.568923 4860 scope.go:117] "RemoveContainer" containerID="65d660c1bbb467d539a9e30d9ad0e3a8746e6a20c96620d0964fcec9a0959484"
Jan 21 21:45:41 crc kubenswrapper[4860]: I0121 21:45:41.630230 4860 scope.go:117] "RemoveContainer" containerID="1ad27740ed618831be7a49f0315efe721a1ca108458b6ad711631f3c16c448d4"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.428586 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vhg7c"]
Jan 21 21:45:44 crc kubenswrapper[4860]: E0121 21:45:44.429526 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f83bc87-eb5a-4b0e-bdb0-103d65e488aa" containerName="collect-profiles"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.429553 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f83bc87-eb5a-4b0e-bdb0-103d65e488aa" containerName="collect-profiles"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.429857 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f83bc87-eb5a-4b0e-bdb0-103d65e488aa" containerName="collect-profiles"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.432492 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.449593 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhg7c"]
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.518924 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-catalog-content\") pod \"redhat-marketplace-vhg7c\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") " pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.519053 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw4zf\" (UniqueName: \"kubernetes.io/projected/61da699e-3dc4-42ee-95af-4096b4f0ecca-kube-api-access-sw4zf\") pod \"redhat-marketplace-vhg7c\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") " pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.519216 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-utilities\") pod \"redhat-marketplace-vhg7c\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") " pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.621811 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-catalog-content\") pod \"redhat-marketplace-vhg7c\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") " pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.621953 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw4zf\" (UniqueName: \"kubernetes.io/projected/61da699e-3dc4-42ee-95af-4096b4f0ecca-kube-api-access-sw4zf\") pod \"redhat-marketplace-vhg7c\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") " pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.621987 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-utilities\") pod \"redhat-marketplace-vhg7c\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") " pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.622622 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-utilities\") pod \"redhat-marketplace-vhg7c\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") " pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.622627 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-catalog-content\") pod \"redhat-marketplace-vhg7c\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") " pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.652385 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw4zf\" (UniqueName: \"kubernetes.io/projected/61da699e-3dc4-42ee-95af-4096b4f0ecca-kube-api-access-sw4zf\") pod \"redhat-marketplace-vhg7c\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") " pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:44 crc kubenswrapper[4860]: I0121 21:45:44.773843 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:45 crc kubenswrapper[4860]: I0121 21:45:45.316609 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhg7c"]
Jan 21 21:45:46 crc kubenswrapper[4860]: I0121 21:45:46.051749 4860 generic.go:334] "Generic (PLEG): container finished" podID="61da699e-3dc4-42ee-95af-4096b4f0ecca" containerID="990a6343b47818bd90f10aa8bebf9755f02642c7e4daf208d59725f8620eaabe" exitCode=0
Jan 21 21:45:46 crc kubenswrapper[4860]: I0121 21:45:46.052254 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhg7c" event={"ID":"61da699e-3dc4-42ee-95af-4096b4f0ecca","Type":"ContainerDied","Data":"990a6343b47818bd90f10aa8bebf9755f02642c7e4daf208d59725f8620eaabe"}
Jan 21 21:45:46 crc kubenswrapper[4860]: I0121 21:45:46.052288 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhg7c" event={"ID":"61da699e-3dc4-42ee-95af-4096b4f0ecca","Type":"ContainerStarted","Data":"7f6e4476bf1c0ea7a6361e9839911d48b9bc6c98601f37d03a48cb50c6a2130e"}
Jan 21 21:45:47 crc kubenswrapper[4860]: I0121 21:45:47.062666 4860 generic.go:334] "Generic (PLEG): container finished" podID="61da699e-3dc4-42ee-95af-4096b4f0ecca" containerID="a6d178c0b3f97d79014efbe9af94ca221537288fd022b78c6f1c6876f8d518c1" exitCode=0
Jan 21 21:45:47 crc kubenswrapper[4860]: I0121 21:45:47.062758 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhg7c" event={"ID":"61da699e-3dc4-42ee-95af-4096b4f0ecca","Type":"ContainerDied","Data":"a6d178c0b3f97d79014efbe9af94ca221537288fd022b78c6f1c6876f8d518c1"}
Jan 21 21:45:48 crc kubenswrapper[4860]: I0121 21:45:48.074218 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhg7c" event={"ID":"61da699e-3dc4-42ee-95af-4096b4f0ecca","Type":"ContainerStarted","Data":"831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff"}
Jan 21 21:45:48 crc kubenswrapper[4860]: I0121 21:45:48.115874 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vhg7c" podStartSLOduration=2.702369784 podStartE2EDuration="4.115842716s" podCreationTimestamp="2026-01-21 21:45:44 +0000 UTC" firstStartedPulling="2026-01-21 21:45:46.054538901 +0000 UTC m=+2238.276717371" lastFinishedPulling="2026-01-21 21:45:47.468011833 +0000 UTC m=+2239.690190303" observedRunningTime="2026-01-21 21:45:48.104095678 +0000 UTC m=+2240.326274168" watchObservedRunningTime="2026-01-21 21:45:48.115842716 +0000 UTC m=+2240.338021196"
Jan 21 21:45:54 crc kubenswrapper[4860]: I0121 21:45:54.774461 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:54 crc kubenswrapper[4860]: I0121 21:45:54.775206 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:54 crc kubenswrapper[4860]: I0121 21:45:54.827617 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:55 crc kubenswrapper[4860]: I0121 21:45:55.198453 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:58 crc kubenswrapper[4860]: I0121 21:45:58.409004 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhg7c"]
Jan 21 21:45:58 crc kubenswrapper[4860]: I0121 21:45:58.410079 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vhg7c" podUID="61da699e-3dc4-42ee-95af-4096b4f0ecca" containerName="registry-server" containerID="cri-o://831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff" gracePeriod=2
Jan 21 21:45:58 crc kubenswrapper[4860]: I0121 21:45:58.939592 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.083892 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-utilities\") pod \"61da699e-3dc4-42ee-95af-4096b4f0ecca\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") "
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.084016 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-catalog-content\") pod \"61da699e-3dc4-42ee-95af-4096b4f0ecca\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") "
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.084082 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw4zf\" (UniqueName: \"kubernetes.io/projected/61da699e-3dc4-42ee-95af-4096b4f0ecca-kube-api-access-sw4zf\") pod \"61da699e-3dc4-42ee-95af-4096b4f0ecca\" (UID: \"61da699e-3dc4-42ee-95af-4096b4f0ecca\") "
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.085694 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-utilities" (OuterVolumeSpecName: "utilities") pod "61da699e-3dc4-42ee-95af-4096b4f0ecca" (UID: "61da699e-3dc4-42ee-95af-4096b4f0ecca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.106264 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61da699e-3dc4-42ee-95af-4096b4f0ecca-kube-api-access-sw4zf" (OuterVolumeSpecName: "kube-api-access-sw4zf") pod "61da699e-3dc4-42ee-95af-4096b4f0ecca" (UID: "61da699e-3dc4-42ee-95af-4096b4f0ecca"). InnerVolumeSpecName "kube-api-access-sw4zf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.135326 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61da699e-3dc4-42ee-95af-4096b4f0ecca" (UID: "61da699e-3dc4-42ee-95af-4096b4f0ecca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.185913 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw4zf\" (UniqueName: \"kubernetes.io/projected/61da699e-3dc4-42ee-95af-4096b4f0ecca-kube-api-access-sw4zf\") on node \"crc\" DevicePath \"\""
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.185964 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.185977 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61da699e-3dc4-42ee-95af-4096b4f0ecca-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.186949 4860 generic.go:334] "Generic (PLEG): container finished" podID="61da699e-3dc4-42ee-95af-4096b4f0ecca" containerID="831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff" exitCode=0
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.187006 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhg7c" event={"ID":"61da699e-3dc4-42ee-95af-4096b4f0ecca","Type":"ContainerDied","Data":"831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff"}
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.187047 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhg7c" event={"ID":"61da699e-3dc4-42ee-95af-4096b4f0ecca","Type":"ContainerDied","Data":"7f6e4476bf1c0ea7a6361e9839911d48b9bc6c98601f37d03a48cb50c6a2130e"}
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.187066 4860 scope.go:117] "RemoveContainer" containerID="831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff"
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.187160 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhg7c"
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.216575 4860 scope.go:117] "RemoveContainer" containerID="a6d178c0b3f97d79014efbe9af94ca221537288fd022b78c6f1c6876f8d518c1"
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.226155 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhg7c"]
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.232624 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhg7c"]
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.241040 4860 scope.go:117] "RemoveContainer" containerID="990a6343b47818bd90f10aa8bebf9755f02642c7e4daf208d59725f8620eaabe"
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.322644 4860 scope.go:117] "RemoveContainer" containerID="831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff"
Jan 21 21:45:59 crc kubenswrapper[4860]: E0121 21:45:59.323180 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff\": container with ID starting with 831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff not found: ID does not exist" containerID="831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff"
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.323226 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff"} err="failed to get container status \"831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff\": rpc error: code = NotFound desc = could not find container \"831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff\": container with ID starting with 831907af4588d52fce18fbe18acf691f639482052734b193e4d963ba5cdbc1ff not found: ID does not exist"
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.323255 4860 scope.go:117] "RemoveContainer" containerID="a6d178c0b3f97d79014efbe9af94ca221537288fd022b78c6f1c6876f8d518c1"
Jan 21 21:45:59 crc kubenswrapper[4860]: E0121 21:45:59.323534 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d178c0b3f97d79014efbe9af94ca221537288fd022b78c6f1c6876f8d518c1\": container with ID starting with a6d178c0b3f97d79014efbe9af94ca221537288fd022b78c6f1c6876f8d518c1 not found: ID does not exist" containerID="a6d178c0b3f97d79014efbe9af94ca221537288fd022b78c6f1c6876f8d518c1"
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.323555 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d178c0b3f97d79014efbe9af94ca221537288fd022b78c6f1c6876f8d518c1"} err="failed to get container status \"a6d178c0b3f97d79014efbe9af94ca221537288fd022b78c6f1c6876f8d518c1\": rpc error: code = NotFound desc = could not find container \"a6d178c0b3f97d79014efbe9af94ca221537288fd022b78c6f1c6876f8d518c1\": container with ID starting with a6d178c0b3f97d79014efbe9af94ca221537288fd022b78c6f1c6876f8d518c1 not found: ID does not exist"
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.323568 4860 scope.go:117] "RemoveContainer" containerID="990a6343b47818bd90f10aa8bebf9755f02642c7e4daf208d59725f8620eaabe"
Jan 21 21:45:59 crc kubenswrapper[4860]: E0121 21:45:59.324059 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990a6343b47818bd90f10aa8bebf9755f02642c7e4daf208d59725f8620eaabe\": container with ID starting with 990a6343b47818bd90f10aa8bebf9755f02642c7e4daf208d59725f8620eaabe not found: ID does not exist" containerID="990a6343b47818bd90f10aa8bebf9755f02642c7e4daf208d59725f8620eaabe"
Jan 21 21:45:59 crc kubenswrapper[4860]: I0121 21:45:59.324085 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990a6343b47818bd90f10aa8bebf9755f02642c7e4daf208d59725f8620eaabe"} err="failed to get container status \"990a6343b47818bd90f10aa8bebf9755f02642c7e4daf208d59725f8620eaabe\": rpc error: code = NotFound desc = could not find container \"990a6343b47818bd90f10aa8bebf9755f02642c7e4daf208d59725f8620eaabe\": container with ID starting with 990a6343b47818bd90f10aa8bebf9755f02642c7e4daf208d59725f8620eaabe not found: ID does not exist"
Jan 21 21:46:00 crc kubenswrapper[4860]: I0121 21:46:00.592927 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61da699e-3dc4-42ee-95af-4096b4f0ecca" path="/var/lib/kubelet/pods/61da699e-3dc4-42ee-95af-4096b4f0ecca/volumes"
Jan 21 21:46:02 crc kubenswrapper[4860]: I0121 21:46:02.104119 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:46:02 crc kubenswrapper[4860]: I0121 21:46:02.104561 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:46:02 crc kubenswrapper[4860]: I0121 21:46:02.104628 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx"
Jan 21 21:46:02 crc kubenswrapper[4860]: I0121 21:46:02.105636 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 21:46:02 crc kubenswrapper[4860]: I0121 21:46:02.105722 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b" gracePeriod=600
Jan 21 21:46:02 crc kubenswrapper[4860]: E0121 21:46:02.252165 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:46:03 crc kubenswrapper[4860]: I0121 21:46:03.247310 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b" exitCode=0
Jan 21 21:46:03 crc kubenswrapper[4860]: I0121 21:46:03.247380 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"}
Jan 21 21:46:03 crc kubenswrapper[4860]: I0121 21:46:03.247445 4860 scope.go:117] "RemoveContainer" containerID="32a9f1332c2c5de681bf846ae634d50dfe1d50c28bd4d09220c269cccaea8975"
Jan 21 21:46:03 crc kubenswrapper[4860]: I0121 21:46:03.248458 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:46:03 crc kubenswrapper[4860]: E0121 21:46:03.249201 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.028200 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rcfkh"]
Jan 21 21:46:17 crc kubenswrapper[4860]: E0121 21:46:17.031292 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61da699e-3dc4-42ee-95af-4096b4f0ecca" containerName="extract-content"
Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.031324 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="61da699e-3dc4-42ee-95af-4096b4f0ecca" containerName="extract-content"
Jan 21 21:46:17 crc kubenswrapper[4860]: E0121 21:46:17.031356 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61da699e-3dc4-42ee-95af-4096b4f0ecca" containerName="extract-utilities"
Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.031365 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="61da699e-3dc4-42ee-95af-4096b4f0ecca" containerName="extract-utilities"
Jan 21 21:46:17 crc kubenswrapper[4860]: E0121 21:46:17.031394 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61da699e-3dc4-42ee-95af-4096b4f0ecca" containerName="registry-server"
Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.031402 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="61da699e-3dc4-42ee-95af-4096b4f0ecca" containerName="registry-server"
Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.031630 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="61da699e-3dc4-42ee-95af-4096b4f0ecca" containerName="registry-server"
Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.033853 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcfkh"
Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.041379 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcfkh"]
Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.093762 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-utilities\") pod \"certified-operators-rcfkh\" (UID: \"75196e6a-849d-4da9-a2bd-9ada3956f948\") " pod="openshift-marketplace/certified-operators-rcfkh"
Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.093827 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfv62\" (UniqueName: \"kubernetes.io/projected/75196e6a-849d-4da9-a2bd-9ada3956f948-kube-api-access-mfv62\") pod \"certified-operators-rcfkh\" (UID: \"75196e6a-849d-4da9-a2bd-9ada3956f948\") " pod="openshift-marketplace/certified-operators-rcfkh"
Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.093899 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-catalog-content\") pod \"certified-operators-rcfkh\" (UID: \"75196e6a-849d-4da9-a2bd-9ada3956f948\") " pod="openshift-marketplace/certified-operators-rcfkh"
Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.199924 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-catalog-content\") pod \"certified-operators-rcfkh\" (UID: 
\"75196e6a-849d-4da9-a2bd-9ada3956f948\") " pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.200044 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-utilities\") pod \"certified-operators-rcfkh\" (UID: \"75196e6a-849d-4da9-a2bd-9ada3956f948\") " pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.200077 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfv62\" (UniqueName: \"kubernetes.io/projected/75196e6a-849d-4da9-a2bd-9ada3956f948-kube-api-access-mfv62\") pod \"certified-operators-rcfkh\" (UID: \"75196e6a-849d-4da9-a2bd-9ada3956f948\") " pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.205475 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-utilities\") pod \"certified-operators-rcfkh\" (UID: \"75196e6a-849d-4da9-a2bd-9ada3956f948\") " pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.206135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-catalog-content\") pod \"certified-operators-rcfkh\" (UID: \"75196e6a-849d-4da9-a2bd-9ada3956f948\") " pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.239952 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfv62\" (UniqueName: \"kubernetes.io/projected/75196e6a-849d-4da9-a2bd-9ada3956f948-kube-api-access-mfv62\") pod \"certified-operators-rcfkh\" (UID: 
\"75196e6a-849d-4da9-a2bd-9ada3956f948\") " pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.407679 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:17 crc kubenswrapper[4860]: I0121 21:46:17.581442 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b" Jan 21 21:46:17 crc kubenswrapper[4860]: E0121 21:46:17.582355 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:46:18 crc kubenswrapper[4860]: I0121 21:46:17.999980 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcfkh"] Jan 21 21:46:18 crc kubenswrapper[4860]: I0121 21:46:18.410256 4860 generic.go:334] "Generic (PLEG): container finished" podID="75196e6a-849d-4da9-a2bd-9ada3956f948" containerID="6cd2e9d000eba4b0570578abcf3587e21d548d799b542f75d136064d09a5e3a2" exitCode=0 Jan 21 21:46:18 crc kubenswrapper[4860]: I0121 21:46:18.410322 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcfkh" event={"ID":"75196e6a-849d-4da9-a2bd-9ada3956f948","Type":"ContainerDied","Data":"6cd2e9d000eba4b0570578abcf3587e21d548d799b542f75d136064d09a5e3a2"} Jan 21 21:46:18 crc kubenswrapper[4860]: I0121 21:46:18.410357 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcfkh" 
event={"ID":"75196e6a-849d-4da9-a2bd-9ada3956f948","Type":"ContainerStarted","Data":"324e970ea0fc50529204f4c36cc5e6233b358c534f54020a7bd45339ac1ec647"} Jan 21 21:46:20 crc kubenswrapper[4860]: I0121 21:46:20.435027 4860 generic.go:334] "Generic (PLEG): container finished" podID="75196e6a-849d-4da9-a2bd-9ada3956f948" containerID="a9eccd1cf1c00424126542dd97b5861db9dac928a0803ba10a5d1d5862eade78" exitCode=0 Jan 21 21:46:20 crc kubenswrapper[4860]: I0121 21:46:20.435094 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcfkh" event={"ID":"75196e6a-849d-4da9-a2bd-9ada3956f948","Type":"ContainerDied","Data":"a9eccd1cf1c00424126542dd97b5861db9dac928a0803ba10a5d1d5862eade78"} Jan 21 21:46:21 crc kubenswrapper[4860]: I0121 21:46:21.448310 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcfkh" event={"ID":"75196e6a-849d-4da9-a2bd-9ada3956f948","Type":"ContainerStarted","Data":"c452f15c2af21c646aa0391252de6856c6fbeefafb0c58d33fbab51c5dc44650"} Jan 21 21:46:21 crc kubenswrapper[4860]: I0121 21:46:21.477497 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rcfkh" podStartSLOduration=2.009387346 podStartE2EDuration="4.47746357s" podCreationTimestamp="2026-01-21 21:46:17 +0000 UTC" firstStartedPulling="2026-01-21 21:46:18.415731811 +0000 UTC m=+2270.637910281" lastFinishedPulling="2026-01-21 21:46:20.883808035 +0000 UTC m=+2273.105986505" observedRunningTime="2026-01-21 21:46:21.473115717 +0000 UTC m=+2273.695294187" watchObservedRunningTime="2026-01-21 21:46:21.47746357 +0000 UTC m=+2273.699642050" Jan 21 21:46:27 crc kubenswrapper[4860]: I0121 21:46:27.412124 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:27 crc kubenswrapper[4860]: I0121 21:46:27.412974 4860 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:27 crc kubenswrapper[4860]: I0121 21:46:27.725106 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:27 crc kubenswrapper[4860]: I0121 21:46:27.785032 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:28 crc kubenswrapper[4860]: I0121 21:46:28.585425 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b" Jan 21 21:46:28 crc kubenswrapper[4860]: E0121 21:46:28.585728 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:46:29 crc kubenswrapper[4860]: I0121 21:46:29.407151 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcfkh"] Jan 21 21:46:29 crc kubenswrapper[4860]: I0121 21:46:29.567536 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rcfkh" podUID="75196e6a-849d-4da9-a2bd-9ada3956f948" containerName="registry-server" containerID="cri-o://c452f15c2af21c646aa0391252de6856c6fbeefafb0c58d33fbab51c5dc44650" gracePeriod=2 Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.591296 4860 generic.go:334] "Generic (PLEG): container finished" podID="75196e6a-849d-4da9-a2bd-9ada3956f948" containerID="c452f15c2af21c646aa0391252de6856c6fbeefafb0c58d33fbab51c5dc44650" exitCode=0 Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.599373 
4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcfkh" event={"ID":"75196e6a-849d-4da9-a2bd-9ada3956f948","Type":"ContainerDied","Data":"c452f15c2af21c646aa0391252de6856c6fbeefafb0c58d33fbab51c5dc44650"} Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.599444 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcfkh" event={"ID":"75196e6a-849d-4da9-a2bd-9ada3956f948","Type":"ContainerDied","Data":"324e970ea0fc50529204f4c36cc5e6233b358c534f54020a7bd45339ac1ec647"} Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.599459 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="324e970ea0fc50529204f4c36cc5e6233b358c534f54020a7bd45339ac1ec647" Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.640626 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.806641 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfv62\" (UniqueName: \"kubernetes.io/projected/75196e6a-849d-4da9-a2bd-9ada3956f948-kube-api-access-mfv62\") pod \"75196e6a-849d-4da9-a2bd-9ada3956f948\" (UID: \"75196e6a-849d-4da9-a2bd-9ada3956f948\") " Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.806779 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-utilities\") pod \"75196e6a-849d-4da9-a2bd-9ada3956f948\" (UID: \"75196e6a-849d-4da9-a2bd-9ada3956f948\") " Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.806858 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-catalog-content\") pod 
\"75196e6a-849d-4da9-a2bd-9ada3956f948\" (UID: \"75196e6a-849d-4da9-a2bd-9ada3956f948\") " Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.807843 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-utilities" (OuterVolumeSpecName: "utilities") pod "75196e6a-849d-4da9-a2bd-9ada3956f948" (UID: "75196e6a-849d-4da9-a2bd-9ada3956f948"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.813806 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75196e6a-849d-4da9-a2bd-9ada3956f948-kube-api-access-mfv62" (OuterVolumeSpecName: "kube-api-access-mfv62") pod "75196e6a-849d-4da9-a2bd-9ada3956f948" (UID: "75196e6a-849d-4da9-a2bd-9ada3956f948"). InnerVolumeSpecName "kube-api-access-mfv62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.869611 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75196e6a-849d-4da9-a2bd-9ada3956f948" (UID: "75196e6a-849d-4da9-a2bd-9ada3956f948"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.908900 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.908980 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75196e6a-849d-4da9-a2bd-9ada3956f948-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:46:30 crc kubenswrapper[4860]: I0121 21:46:30.908995 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfv62\" (UniqueName: \"kubernetes.io/projected/75196e6a-849d-4da9-a2bd-9ada3956f948-kube-api-access-mfv62\") on node \"crc\" DevicePath \"\"" Jan 21 21:46:31 crc kubenswrapper[4860]: I0121 21:46:31.600198 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcfkh" Jan 21 21:46:31 crc kubenswrapper[4860]: I0121 21:46:31.634509 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcfkh"] Jan 21 21:46:31 crc kubenswrapper[4860]: I0121 21:46:31.647528 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rcfkh"] Jan 21 21:46:32 crc kubenswrapper[4860]: I0121 21:46:32.591540 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75196e6a-849d-4da9-a2bd-9ada3956f948" path="/var/lib/kubelet/pods/75196e6a-849d-4da9-a2bd-9ada3956f948/volumes" Jan 21 21:46:41 crc kubenswrapper[4860]: I0121 21:46:41.825885 4860 scope.go:117] "RemoveContainer" containerID="07696da3ca0ec4b70ee33d819385b06267103c2cf49f09c5432b0fa427e3ede0" Jan 21 21:46:41 crc kubenswrapper[4860]: I0121 21:46:41.870730 4860 scope.go:117] "RemoveContainer" 
containerID="ced8aa5acf2426673c848b571b32fb429e700f6cae49453fafa37f84e8c87c94" Jan 21 21:46:41 crc kubenswrapper[4860]: I0121 21:46:41.920893 4860 scope.go:117] "RemoveContainer" containerID="75f8162f49ca4a6c3802a1e7b0f922ba950acc41d2202fe973322248b4ff08c7" Jan 21 21:46:41 crc kubenswrapper[4860]: I0121 21:46:41.984015 4860 scope.go:117] "RemoveContainer" containerID="92cb3f8317d559d8e2324a7668f63d93ea006dcd5d5febc2332fdaa33781c07c" Jan 21 21:46:42 crc kubenswrapper[4860]: I0121 21:46:42.580022 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b" Jan 21 21:46:42 crc kubenswrapper[4860]: E0121 21:46:42.580433 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:46:54 crc kubenswrapper[4860]: I0121 21:46:54.579155 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b" Jan 21 21:46:54 crc kubenswrapper[4860]: E0121 21:46:54.580109 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:46:58 crc kubenswrapper[4860]: I0121 21:46:58.809310 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q67c7_a8923e74-d8ad-4a90-ba9f-f26f7c92ef4f/prometheus-operator/0.log" Jan 21 21:46:58 crc kubenswrapper[4860]: I0121 21:46:58.822733 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-855647d7cb-ljxv6_a1ce9223-1adf-48f8-a0bf-31ce28e5719f/prometheus-operator-admission-webhook/0.log" Jan 21 21:46:58 crc kubenswrapper[4860]: I0121 21:46:58.836145 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-855647d7cb-tvpsv_b2f8b6ee-0b46-4492-ae99-aea050eed563/prometheus-operator-admission-webhook/0.log" Jan 21 21:46:58 crc kubenswrapper[4860]: I0121 21:46:58.880998 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-t8zjn_db3166f1-3c99-4217-859b-24835c6f1f1e/operator/0.log" Jan 21 21:46:58 crc kubenswrapper[4860]: I0121 21:46:58.889624 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-qj2fs_6a4226f5-36cd-49b1-bbf3-2d13973b45b5/observability-ui-dashboards/0.log" Jan 21 21:46:58 crc kubenswrapper[4860]: I0121 21:46:58.905803 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-mv2g7_c5c4c6e9-c3e2-4b43-94a2-1918304ff52a/perses-operator/0.log" Jan 21 21:46:59 crc kubenswrapper[4860]: I0121 21:46:59.281050 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-wzmgt_20199873-120c-483b-b74e-6d501fdb151a/cert-manager-controller/0.log" Jan 21 21:46:59 crc kubenswrapper[4860]: I0121 21:46:59.296896 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-m5v7j_fa444955-5bc4-4188-9b3e-80b24e9e6cb4/cert-manager-cainjector/0.log" Jan 21 21:46:59 crc kubenswrapper[4860]: I0121 
21:46:59.311014 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-zvf7j_5889d6e2-f3dc-4189-a782-cf0ad4db5e55/cert-manager-webhook/0.log" Jan 21 21:46:59 crc kubenswrapper[4860]: I0121 21:46:59.859375 4860 generic.go:334] "Generic (PLEG): container finished" podID="f2c12be4-8e69-45c0-88a0-e2148aae2e90" containerID="71116cd99910e33548b80399020d100ba2719488e4440d2a19738870d1d6cb90" exitCode=0 Jan 21 21:46:59 crc kubenswrapper[4860]: I0121 21:46:59.859484 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5k4pz/must-gather-t8b54" event={"ID":"f2c12be4-8e69-45c0-88a0-e2148aae2e90","Type":"ContainerDied","Data":"71116cd99910e33548b80399020d100ba2719488e4440d2a19738870d1d6cb90"} Jan 21 21:46:59 crc kubenswrapper[4860]: I0121 21:46:59.860962 4860 scope.go:117] "RemoveContainer" containerID="71116cd99910e33548b80399020d100ba2719488e4440d2a19738870d1d6cb90" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.391363 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/extract/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.394053 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/controller/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.400654 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xd2ml_c9335377-613f-4d57-8ad1-48dc561aaa28/kube-rbac-proxy/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.400956 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/util/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.409136 4860 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5k4pz_must-gather-t8b54_f2c12be4-8e69-45c0-88a0-e2148aae2e90/gather/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.422751 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/pull/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.426775 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/controller/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.445005 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-sslzp_404e97a3-3fcd-4ec0-a67d-53ed93d62685/manager/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.526461 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-c95ps_2dd3e1b9-abea-4287-87e0-cb3f60423d54/manager/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.543333 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-vrvmq_1a209a81-fb7b-4621-84db-567f96093a6b/manager/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.559439 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/extract/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.565920 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/util/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.577386 4860 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/pull/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.597372 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-p7jg2_33a0c624-f40b-4d45-9b00-39c36c15d6bb/manager/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.619581 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b29tb_f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85/manager/0.log" Jan 21 21:47:00 crc kubenswrapper[4860]: I0121 21:47:00.634290 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-pvq7t_084bba8e-36e4-4e04-8109-4b0f6f97d37f/manager/0.log" Jan 21 21:47:01 crc kubenswrapper[4860]: I0121 21:47:01.046078 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-8hx7p_3d5ae9ad-1309-4221-b99a-86b9e5aa075b/manager/0.log" Jan 21 21:47:01 crc kubenswrapper[4860]: I0121 21:47:01.217199 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-ldzzc_d107aacb-3e12-43fd-a68c-2a6b2c10295c/manager/0.log" Jan 21 21:47:01 crc kubenswrapper[4860]: I0121 21:47:01.431091 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4vpgf_96503e13-4e73-4048-be57-01a726c114da/manager/0.log" Jan 21 21:47:01 crc kubenswrapper[4860]: I0121 21:47:01.468822 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-w6jg6_519cbf74-c4d7-425b-837d-afbb85f3ecc4/manager/0.log" Jan 21 21:47:01 crc kubenswrapper[4860]: I0121 21:47:01.538647 4860 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-w857v_4f7ce297-eef0-4067-bd7b-1bb64ced0239/manager/0.log" Jan 21 21:47:01 crc kubenswrapper[4860]: I0121 21:47:01.551809 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-8mv6c_626c3db6-f60f-472b-b0e5-0834b5bded25/manager/0.log" Jan 21 21:47:01 crc kubenswrapper[4860]: I0121 21:47:01.580326 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-nn25n_69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5/manager/0.log" Jan 21 21:47:01 crc kubenswrapper[4860]: I0121 21:47:01.609581 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q8wm8_adcb4b85-f016-45ed-8029-7191ade5683a/manager/0.log" Jan 21 21:47:01 crc kubenswrapper[4860]: I0121 21:47:01.630054 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854787gn_95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96/manager/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.679699 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c98596b-6jfrl_8dad99b9-0de7-450d-8c58-96590671dd98/manager/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.694462 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bhnr9_f4f99b18-596f-4e28-8941-0b83f1cf57e5/registry-server/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.697114 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.708313 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/reloader/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.711323 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-nbvmh_a5eceab3-1171-484d-91da-990d323440d4/manager/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.714850 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/frr-metrics/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.726156 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.731238 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-m892h_9731b174-d203-4170-b49f-0de94000f154/manager/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.735800 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/kube-rbac-proxy-frr/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.746474 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-frr-files/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.750169 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mpknx_93010989-aa15-487c-b470-919932329af1/operator/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.756444 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-reloader/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.764241 4860 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6m2js_970afa92-8bd5-4351-80dd-ca87ad067409/cp-metrics/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.775324 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-pv9x9_b4019683-a628-42e6-91ba-1cb0505326e3/manager/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.781232 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6vpls_e4bfa648-7d9f-488c-9b1b-ffd3cb2d997e/frr-k8s-webhook-server/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.827070 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5844d47cc5-cxs88_c8584c36-7092-4bd3-b92e-5a3e8c16ec63/manager/0.log" Jan 21 21:47:02 crc kubenswrapper[4860]: I0121 21:47:02.841143 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ccfb7bd9d-w49p7_f6d67ae0-be03-465f-bb51-ace581cc0bb8/webhook-server/0.log" Jan 21 21:47:03 crc kubenswrapper[4860]: I0121 21:47:03.229761 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-bk9sb_61a273d5-b25c-4729-8736-9965ac435468/manager/0.log" Jan 21 21:47:03 crc kubenswrapper[4860]: I0121 21:47:03.230147 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/speaker/0.log" Jan 21 21:47:03 crc kubenswrapper[4860]: I0121 21:47:03.239580 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5hvn2_65134009-4244-4384-91b7-057584cd6586/kube-rbac-proxy/0.log" Jan 21 21:47:03 crc kubenswrapper[4860]: I0121 21:47:03.242041 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-tldvn_3f367ab5-2df3-466b-8ec4-7c4f23dcc578/manager/0.log" Jan 21 21:47:03 crc kubenswrapper[4860]: I0121 21:47:03.682978 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-844f9d4c74-gwp5p_84bd609c-f081-46a8-80ba-9c251389699e/manager/0.log" Jan 21 21:47:03 crc kubenswrapper[4860]: I0121 21:47:03.692441 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-8w757_bdbebf1c-8bd6-4223-939a-f088d773cdc5/registry-server/0.log" Jan 21 21:47:04 crc kubenswrapper[4860]: I0121 21:47:04.442739 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-wzmgt_20199873-120c-483b-b74e-6d501fdb151a/cert-manager-controller/0.log" Jan 21 21:47:04 crc kubenswrapper[4860]: I0121 21:47:04.462623 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-m5v7j_fa444955-5bc4-4188-9b3e-80b24e9e6cb4/cert-manager-cainjector/0.log" Jan 21 21:47:04 crc kubenswrapper[4860]: I0121 21:47:04.484579 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-zvf7j_5889d6e2-f3dc-4189-a782-cf0ad4db5e55/cert-manager-webhook/0.log" Jan 21 21:47:05 crc kubenswrapper[4860]: I0121 21:47:05.978827 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4x452_70aea1b0-13b2-43ee-a77d-10c3143e4a95/control-plane-machine-set-operator/0.log" Jan 21 21:47:05 crc kubenswrapper[4860]: I0121 21:47:05.998463 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jx5dt_40070d0f-4d18-4d7c-a85a-cd2f904ea27a/kube-rbac-proxy/0.log" Jan 21 21:47:06 crc kubenswrapper[4860]: I0121 21:47:06.017883 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jx5dt_40070d0f-4d18-4d7c-a85a-cd2f904ea27a/machine-api-operator/0.log" Jan 21 21:47:06 crc kubenswrapper[4860]: I0121 21:47:06.463729 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-82rm8_b6c5b0be-96f9-4141-a721-54ca98a89d93/nmstate-console-plugin/0.log" Jan 21 21:47:06 crc kubenswrapper[4860]: I0121 21:47:06.487015 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-66jdw_4ccac8fa-d2c8-4110-9bd4-78a6340612f9/nmstate-handler/0.log" Jan 21 21:47:06 crc kubenswrapper[4860]: I0121 21:47:06.510590 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ktn72_8364952a-bcf3-49ae-b357-0521e9d6e04e/nmstate-metrics/0.log" Jan 21 21:47:06 crc kubenswrapper[4860]: I0121 21:47:06.526490 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ktn72_8364952a-bcf3-49ae-b357-0521e9d6e04e/kube-rbac-proxy/0.log" Jan 21 21:47:06 crc kubenswrapper[4860]: I0121 21:47:06.558029 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tpllw_5f9bf17c-9142-474a-8a94-7e8cc90702f0/nmstate-operator/0.log" Jan 21 21:47:06 crc kubenswrapper[4860]: I0121 21:47:06.577525 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-wnc66_cd4a9e40-3ac7-4645-a3a5-a5a42890cb5d/nmstate-webhook/0.log" Jan 21 21:47:06 crc kubenswrapper[4860]: I0121 21:47:06.579046 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b" Jan 21 21:47:06 crc kubenswrapper[4860]: E0121 21:47:06.579444 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.420199 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/extract/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.428566 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/util/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.437765 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_621f1ac024d2d66c655d1ff3de84c0bc9742364141c002e777be118f416d278_4882d6a4-5a1e-446f-aba5-22af497454ef/pull/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.454076 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-sslzp_404e97a3-3fcd-4ec0-a67d-53ed93d62685/manager/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.498055 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-c95ps_2dd3e1b9-abea-4287-87e0-cb3f60423d54/manager/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.516155 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-vrvmq_1a209a81-fb7b-4621-84db-567f96093a6b/manager/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.549201 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/extract/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.561484 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/util/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.578491 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ff42995a8c5005342a031bf79a597bdc660a1c81752c219d0c3e8d0ae1wn97s_4d46ff7a-85e0-461a-aea5-d5b8f2d39634/pull/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.595985 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-p7jg2_33a0c624-f40b-4d45-9b00-39c36c15d6bb/manager/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.636607 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b29tb_f7cd8d4f-753e-4b6f-a69a-2ce4c8b2ee85/manager/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.661687 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-pvq7t_084bba8e-36e4-4e04-8109-4b0f6f97d37f/manager/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.829504 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-8hx7p_3d5ae9ad-1309-4221-b99a-86b9e5aa075b/manager/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.842873 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-ldzzc_d107aacb-3e12-43fd-a68c-2a6b2c10295c/manager/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.966153 4860 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4vpgf_96503e13-4e73-4048-be57-01a726c114da/manager/0.log" Jan 21 21:47:07 crc kubenswrapper[4860]: I0121 21:47:07.983126 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-w6jg6_519cbf74-c4d7-425b-837d-afbb85f3ecc4/manager/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.018178 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-w857v_4f7ce297-eef0-4067-bd7b-1bb64ced0239/manager/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.032883 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-8mv6c_626c3db6-f60f-472b-b0e5-0834b5bded25/manager/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.053078 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-nn25n_69b9fdd7-ae64-4756-ad1c-27de6ec5ffb5/manager/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.065345 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q8wm8_adcb4b85-f016-45ed-8029-7191ade5683a/manager/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.083640 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854787gn_95ed04ab-1ca5-4c7d-bd51-3a884a0c1d96/manager/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.708911 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c98596b-6jfrl_8dad99b9-0de7-450d-8c58-96590671dd98/manager/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 
21:47:08.741913 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bhnr9_f4f99b18-596f-4e28-8941-0b83f1cf57e5/registry-server/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.758715 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-nbvmh_a5eceab3-1171-484d-91da-990d323440d4/manager/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.789413 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-m892h_9731b174-d203-4170-b49f-0de94000f154/manager/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.823117 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mpknx_93010989-aa15-487c-b470-919932329af1/operator/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.846416 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-pv9x9_b4019683-a628-42e6-91ba-1cb0505326e3/manager/0.log" Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.963828 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5k4pz/must-gather-t8b54"] Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.963951 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5k4pz/must-gather-t8b54"] Jan 21 21:47:08 crc kubenswrapper[4860]: I0121 21:47:08.964243 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5k4pz/must-gather-t8b54" podUID="f2c12be4-8e69-45c0-88a0-e2148aae2e90" containerName="copy" containerID="cri-o://f2d8b390669fccd91ae8f536452c388a7318d6b54bdf7b11e46639a43dde4642" gracePeriod=2 Jan 21 21:47:09 crc kubenswrapper[4860]: I0121 21:47:09.212503 4860 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-bk9sb_61a273d5-b25c-4729-8736-9965ac435468/manager/0.log" Jan 21 21:47:09 crc kubenswrapper[4860]: I0121 21:47:09.228269 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-tldvn_3f367ab5-2df3-466b-8ec4-7c4f23dcc578/manager/0.log" Jan 21 21:47:09 crc kubenswrapper[4860]: I0121 21:47:09.657644 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-844f9d4c74-gwp5p_84bd609c-f081-46a8-80ba-9c251389699e/manager/0.log" Jan 21 21:47:09 crc kubenswrapper[4860]: I0121 21:47:09.676626 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-8w757_bdbebf1c-8bd6-4223-939a-f088d773cdc5/registry-server/0.log" Jan 21 21:47:09 crc kubenswrapper[4860]: I0121 21:47:09.982325 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5k4pz_must-gather-t8b54_f2c12be4-8e69-45c0-88a0-e2148aae2e90/copy/0.log" Jan 21 21:47:09 crc kubenswrapper[4860]: I0121 21:47:09.986585 4860 generic.go:334] "Generic (PLEG): container finished" podID="f2c12be4-8e69-45c0-88a0-e2148aae2e90" containerID="f2d8b390669fccd91ae8f536452c388a7318d6b54bdf7b11e46639a43dde4642" exitCode=143 Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.076195 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5k4pz_must-gather-t8b54_f2c12be4-8e69-45c0-88a0-e2148aae2e90/copy/0.log" Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.076826 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5k4pz/must-gather-t8b54" Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.130821 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2c12be4-8e69-45c0-88a0-e2148aae2e90-must-gather-output\") pod \"f2c12be4-8e69-45c0-88a0-e2148aae2e90\" (UID: \"f2c12be4-8e69-45c0-88a0-e2148aae2e90\") " Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.130973 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbvnq\" (UniqueName: \"kubernetes.io/projected/f2c12be4-8e69-45c0-88a0-e2148aae2e90-kube-api-access-hbvnq\") pod \"f2c12be4-8e69-45c0-88a0-e2148aae2e90\" (UID: \"f2c12be4-8e69-45c0-88a0-e2148aae2e90\") " Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.141901 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2c12be4-8e69-45c0-88a0-e2148aae2e90-kube-api-access-hbvnq" (OuterVolumeSpecName: "kube-api-access-hbvnq") pod "f2c12be4-8e69-45c0-88a0-e2148aae2e90" (UID: "f2c12be4-8e69-45c0-88a0-e2148aae2e90"). InnerVolumeSpecName "kube-api-access-hbvnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.233670 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbvnq\" (UniqueName: \"kubernetes.io/projected/f2c12be4-8e69-45c0-88a0-e2148aae2e90-kube-api-access-hbvnq\") on node \"crc\" DevicePath \"\"" Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.277466 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2c12be4-8e69-45c0-88a0-e2148aae2e90-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f2c12be4-8e69-45c0-88a0-e2148aae2e90" (UID: "f2c12be4-8e69-45c0-88a0-e2148aae2e90"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.338090 4860 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2c12be4-8e69-45c0-88a0-e2148aae2e90-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.590113 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2c12be4-8e69-45c0-88a0-e2148aae2e90" path="/var/lib/kubelet/pods/f2c12be4-8e69-45c0-88a0-e2148aae2e90/volumes" Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.997872 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5k4pz_must-gather-t8b54_f2c12be4-8e69-45c0-88a0-e2148aae2e90/copy/0.log" Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.998754 4860 scope.go:117] "RemoveContainer" containerID="f2d8b390669fccd91ae8f536452c388a7318d6b54bdf7b11e46639a43dde4642" Jan 21 21:47:10 crc kubenswrapper[4860]: I0121 21:47:10.999024 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5k4pz/must-gather-t8b54" Jan 21 21:47:11 crc kubenswrapper[4860]: I0121 21:47:11.028378 4860 scope.go:117] "RemoveContainer" containerID="71116cd99910e33548b80399020d100ba2719488e4440d2a19738870d1d6cb90" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.558415 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/kube-multus-additional-cni-plugins/0.log" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.570877 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/egress-router-binary-copy/0.log" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.580273 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/cni-plugins/0.log" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.588723 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/bond-cni-plugin/0.log" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.613281 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/routeoverride-cni/0.log" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.622888 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/whereabouts-cni-bincopy/0.log" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.632007 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-77hw7_9a9e9fa6-0fb9-47bf-a3a6-ab04dc59ce04/whereabouts-cni/0.log" Jan 21 21:47:12 crc 
kubenswrapper[4860]: I0121 21:47:12.648605 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-lcbjc_2e29e04b-89f7-4d77-8e17-0355493a1d9f/multus-admission-controller/0.log" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.655364 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-lcbjc_2e29e04b-89f7-4d77-8e17-0355493a1d9f/kube-rbac-proxy/0.log" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.718637 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/2.log" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.753524 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s67xh_e2a7ca69-9cb5-41b5-9213-72165a9fc8e1/kube-multus/3.log" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.791046 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-rrwcr_60ae05da-3403-4a2f-92f4-2ffa574a65a8/network-metrics-daemon/0.log" Jan 21 21:47:12 crc kubenswrapper[4860]: I0121 21:47:12.797695 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-rrwcr_60ae05da-3403-4a2f-92f4-2ffa574a65a8/kube-rbac-proxy/0.log" Jan 21 21:47:19 crc kubenswrapper[4860]: I0121 21:47:19.579259 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b" Jan 21 21:47:19 crc kubenswrapper[4860]: E0121 21:47:19.580422 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:47:33 crc kubenswrapper[4860]: I0121 21:47:33.710212 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b" Jan 21 21:47:33 crc kubenswrapper[4860]: E0121 21:47:33.711234 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.644887 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mbplj"] Jan 21 21:47:41 crc kubenswrapper[4860]: E0121 21:47:41.646326 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75196e6a-849d-4da9-a2bd-9ada3956f948" containerName="extract-content" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.646342 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="75196e6a-849d-4da9-a2bd-9ada3956f948" containerName="extract-content" Jan 21 21:47:41 crc kubenswrapper[4860]: E0121 21:47:41.646371 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c12be4-8e69-45c0-88a0-e2148aae2e90" containerName="copy" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.646379 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2c12be4-8e69-45c0-88a0-e2148aae2e90" containerName="copy" Jan 21 21:47:41 crc kubenswrapper[4860]: E0121 21:47:41.646388 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c12be4-8e69-45c0-88a0-e2148aae2e90" containerName="gather" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.646395 4860 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f2c12be4-8e69-45c0-88a0-e2148aae2e90" containerName="gather" Jan 21 21:47:41 crc kubenswrapper[4860]: E0121 21:47:41.646404 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75196e6a-849d-4da9-a2bd-9ada3956f948" containerName="registry-server" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.646410 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="75196e6a-849d-4da9-a2bd-9ada3956f948" containerName="registry-server" Jan 21 21:47:41 crc kubenswrapper[4860]: E0121 21:47:41.646420 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75196e6a-849d-4da9-a2bd-9ada3956f948" containerName="extract-utilities" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.646427 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="75196e6a-849d-4da9-a2bd-9ada3956f948" containerName="extract-utilities" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.646605 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2c12be4-8e69-45c0-88a0-e2148aae2e90" containerName="copy" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.646638 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2c12be4-8e69-45c0-88a0-e2148aae2e90" containerName="gather" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.646649 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="75196e6a-849d-4da9-a2bd-9ada3956f948" containerName="registry-server" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.647997 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mbplj" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.790673 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mbplj"] Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.877340 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-utilities\") pod \"community-operators-mbplj\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") " pod="openshift-marketplace/community-operators-mbplj" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.877476 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbvcz\" (UniqueName: \"kubernetes.io/projected/fe1a2c0d-9720-46af-9d63-5670d5c367c9-kube-api-access-gbvcz\") pod \"community-operators-mbplj\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") " pod="openshift-marketplace/community-operators-mbplj" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.877514 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-catalog-content\") pod \"community-operators-mbplj\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") " pod="openshift-marketplace/community-operators-mbplj" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.979646 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbvcz\" (UniqueName: \"kubernetes.io/projected/fe1a2c0d-9720-46af-9d63-5670d5c367c9-kube-api-access-gbvcz\") pod \"community-operators-mbplj\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") " pod="openshift-marketplace/community-operators-mbplj" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.979747 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-catalog-content\") pod \"community-operators-mbplj\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") " pod="openshift-marketplace/community-operators-mbplj" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.979842 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-utilities\") pod \"community-operators-mbplj\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") " pod="openshift-marketplace/community-operators-mbplj" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.981200 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-catalog-content\") pod \"community-operators-mbplj\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") " pod="openshift-marketplace/community-operators-mbplj" Jan 21 21:47:41 crc kubenswrapper[4860]: I0121 21:47:41.981246 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-utilities\") pod \"community-operators-mbplj\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") " pod="openshift-marketplace/community-operators-mbplj" Jan 21 21:47:42 crc kubenswrapper[4860]: I0121 21:47:42.003296 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbvcz\" (UniqueName: \"kubernetes.io/projected/fe1a2c0d-9720-46af-9d63-5670d5c367c9-kube-api-access-gbvcz\") pod \"community-operators-mbplj\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") " pod="openshift-marketplace/community-operators-mbplj" Jan 21 21:47:42 crc kubenswrapper[4860]: I0121 21:47:42.099656 4860 scope.go:117] "RemoveContainer" 
containerID="ed06255e5e46272974158e8054e08c71fb43dedec44434e07d0bd4ccb42f326f"
Jan 21 21:47:42 crc kubenswrapper[4860]: I0121 21:47:42.127249 4860 scope.go:117] "RemoveContainer" containerID="7885b1023e6930fa06a502a789002b8c487f29918594dc2cb7bdf8c47420552b"
Jan 21 21:47:42 crc kubenswrapper[4860]: I0121 21:47:42.187191 4860 scope.go:117] "RemoveContainer" containerID="31bb0aa3bad73f81e96c3d82f53b9a985e67b1c9349560f1df27b350bc4dab5d"
Jan 21 21:47:42 crc kubenswrapper[4860]: I0121 21:47:42.211084 4860 scope.go:117] "RemoveContainer" containerID="f9e3dc88bc938e9cfa07715dd5eb4da9ea6c41aee21f58e0169f9413ef563d22"
Jan 21 21:47:42 crc kubenswrapper[4860]: I0121 21:47:42.267389 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mbplj"
Jan 21 21:47:42 crc kubenswrapper[4860]: I0121 21:47:42.635256 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mbplj"]
Jan 21 21:47:43 crc kubenswrapper[4860]: I0121 21:47:43.324193 4860 generic.go:334] "Generic (PLEG): container finished" podID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" containerID="40c89ee44a4f6de339961a5d90df6d83e36a3737bc7c06226183c1cf6f1ca591" exitCode=0
Jan 21 21:47:43 crc kubenswrapper[4860]: I0121 21:47:43.324412 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbplj" event={"ID":"fe1a2c0d-9720-46af-9d63-5670d5c367c9","Type":"ContainerDied","Data":"40c89ee44a4f6de339961a5d90df6d83e36a3737bc7c06226183c1cf6f1ca591"}
Jan 21 21:47:43 crc kubenswrapper[4860]: I0121 21:47:43.324902 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbplj" event={"ID":"fe1a2c0d-9720-46af-9d63-5670d5c367c9","Type":"ContainerStarted","Data":"e73bfe2eeb2f4d20c5a4fd46270fcb3bed1c9a68d354b5f2ed4eecd30c97f67d"}
Jan 21 21:47:43 crc kubenswrapper[4860]: I0121 21:47:43.329268 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 21:47:44 crc kubenswrapper[4860]: I0121 21:47:44.335496 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbplj" event={"ID":"fe1a2c0d-9720-46af-9d63-5670d5c367c9","Type":"ContainerStarted","Data":"314cecd2b8624f66638875a8fb11fd210b31d872159f8417301d24d4970148f2"}
Jan 21 21:47:45 crc kubenswrapper[4860]: I0121 21:47:45.349803 4860 generic.go:334] "Generic (PLEG): container finished" podID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" containerID="314cecd2b8624f66638875a8fb11fd210b31d872159f8417301d24d4970148f2" exitCode=0
Jan 21 21:47:45 crc kubenswrapper[4860]: I0121 21:47:45.349895 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbplj" event={"ID":"fe1a2c0d-9720-46af-9d63-5670d5c367c9","Type":"ContainerDied","Data":"314cecd2b8624f66638875a8fb11fd210b31d872159f8417301d24d4970148f2"}
Jan 21 21:47:46 crc kubenswrapper[4860]: I0121 21:47:46.362311 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbplj" event={"ID":"fe1a2c0d-9720-46af-9d63-5670d5c367c9","Type":"ContainerStarted","Data":"d2f16d3d89b9bbbfed5f940f9c1ca7e0c16953a53bc3290cd13d2efedb1b4c9f"}
Jan 21 21:47:46 crc kubenswrapper[4860]: I0121 21:47:46.398226 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mbplj" podStartSLOduration=2.977544788 podStartE2EDuration="5.398192343s" podCreationTimestamp="2026-01-21 21:47:41 +0000 UTC" firstStartedPulling="2026-01-21 21:47:43.328783102 +0000 UTC m=+2355.550961582" lastFinishedPulling="2026-01-21 21:47:45.749430667 +0000 UTC m=+2357.971609137" observedRunningTime="2026-01-21 21:47:46.396901184 +0000 UTC m=+2358.619079654" watchObservedRunningTime="2026-01-21 21:47:46.398192343 +0000 UTC m=+2358.620370813"
Jan 21 21:47:47 crc kubenswrapper[4860]: I0121 21:47:47.579316 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:47:47 crc kubenswrapper[4860]: E0121 21:47:47.579599 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:47:52 crc kubenswrapper[4860]: I0121 21:47:52.267694 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mbplj"
Jan 21 21:47:52 crc kubenswrapper[4860]: I0121 21:47:52.271918 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mbplj"
Jan 21 21:47:52 crc kubenswrapper[4860]: I0121 21:47:52.327654 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mbplj"
Jan 21 21:47:52 crc kubenswrapper[4860]: I0121 21:47:52.481547 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mbplj"
Jan 21 21:47:53 crc kubenswrapper[4860]: I0121 21:47:53.012385 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mbplj"]
Jan 21 21:47:54 crc kubenswrapper[4860]: I0121 21:47:54.438645 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mbplj" podUID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" containerName="registry-server" containerID="cri-o://d2f16d3d89b9bbbfed5f940f9c1ca7e0c16953a53bc3290cd13d2efedb1b4c9f" gracePeriod=2
Jan 21 21:47:55 crc kubenswrapper[4860]: I0121 21:47:55.451979 4860 generic.go:334] "Generic (PLEG): container finished" podID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" containerID="d2f16d3d89b9bbbfed5f940f9c1ca7e0c16953a53bc3290cd13d2efedb1b4c9f" exitCode=0
Jan 21 21:47:55 crc kubenswrapper[4860]: I0121 21:47:55.452073 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbplj" event={"ID":"fe1a2c0d-9720-46af-9d63-5670d5c367c9","Type":"ContainerDied","Data":"d2f16d3d89b9bbbfed5f940f9c1ca7e0c16953a53bc3290cd13d2efedb1b4c9f"}
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.196018 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mbplj"
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.210914 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbvcz\" (UniqueName: \"kubernetes.io/projected/fe1a2c0d-9720-46af-9d63-5670d5c367c9-kube-api-access-gbvcz\") pod \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") "
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.211008 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-utilities\") pod \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") "
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.211063 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-catalog-content\") pod \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\" (UID: \"fe1a2c0d-9720-46af-9d63-5670d5c367c9\") "
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.213529 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-utilities" (OuterVolumeSpecName: "utilities") pod "fe1a2c0d-9720-46af-9d63-5670d5c367c9" (UID: "fe1a2c0d-9720-46af-9d63-5670d5c367c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.225164 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1a2c0d-9720-46af-9d63-5670d5c367c9-kube-api-access-gbvcz" (OuterVolumeSpecName: "kube-api-access-gbvcz") pod "fe1a2c0d-9720-46af-9d63-5670d5c367c9" (UID: "fe1a2c0d-9720-46af-9d63-5670d5c367c9"). InnerVolumeSpecName "kube-api-access-gbvcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.292153 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe1a2c0d-9720-46af-9d63-5670d5c367c9" (UID: "fe1a2c0d-9720-46af-9d63-5670d5c367c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.312996 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbvcz\" (UniqueName: \"kubernetes.io/projected/fe1a2c0d-9720-46af-9d63-5670d5c367c9-kube-api-access-gbvcz\") on node \"crc\" DevicePath \"\""
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.313042 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.313054 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe1a2c0d-9720-46af-9d63-5670d5c367c9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.465327 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbplj" event={"ID":"fe1a2c0d-9720-46af-9d63-5670d5c367c9","Type":"ContainerDied","Data":"e73bfe2eeb2f4d20c5a4fd46270fcb3bed1c9a68d354b5f2ed4eecd30c97f67d"}
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.465441 4860 scope.go:117] "RemoveContainer" containerID="d2f16d3d89b9bbbfed5f940f9c1ca7e0c16953a53bc3290cd13d2efedb1b4c9f"
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.465617 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mbplj"
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.506697 4860 scope.go:117] "RemoveContainer" containerID="314cecd2b8624f66638875a8fb11fd210b31d872159f8417301d24d4970148f2"
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.508138 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mbplj"]
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.515865 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mbplj"]
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.530978 4860 scope.go:117] "RemoveContainer" containerID="40c89ee44a4f6de339961a5d90df6d83e36a3737bc7c06226183c1cf6f1ca591"
Jan 21 21:47:56 crc kubenswrapper[4860]: I0121 21:47:56.594073 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" path="/var/lib/kubelet/pods/fe1a2c0d-9720-46af-9d63-5670d5c367c9/volumes"
Jan 21 21:47:58 crc kubenswrapper[4860]: I0121 21:47:58.585298 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:47:58 crc kubenswrapper[4860]: E0121 21:47:58.585648 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:48:10 crc kubenswrapper[4860]: I0121 21:48:10.579335 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:48:10 crc kubenswrapper[4860]: E0121 21:48:10.580260 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:48:24 crc kubenswrapper[4860]: I0121 21:48:24.579650 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:48:24 crc kubenswrapper[4860]: E0121 21:48:24.580675 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:48:37 crc kubenswrapper[4860]: I0121 21:48:37.584160 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:48:37 crc kubenswrapper[4860]: E0121 21:48:37.585291 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:48:42 crc kubenswrapper[4860]: I0121 21:48:42.409551 4860 scope.go:117] "RemoveContainer" containerID="7c4e9e05f9bb5d753b1aea9b38dab3a528761d1a0e7dc15ff586919ecea179e0"
Jan 21 21:48:42 crc kubenswrapper[4860]: I0121 21:48:42.448502 4860 scope.go:117] "RemoveContainer" containerID="cc07409fd7b182175d87b820415cc8842fe9182701451a9c8b9cbd833c907cd9"
Jan 21 21:48:42 crc kubenswrapper[4860]: I0121 21:48:42.547928 4860 scope.go:117] "RemoveContainer" containerID="3f108f1bdce218e6dc5fe85fe651f929d8a54c80adfb2e7342c9eccca63769ff"
Jan 21 21:48:42 crc kubenswrapper[4860]: I0121 21:48:42.575732 4860 scope.go:117] "RemoveContainer" containerID="eb0eff4df3b1c006e76db7b7a5a3b7386a336010e04cc9f8f40eef5485c249aa"
Jan 21 21:48:49 crc kubenswrapper[4860]: I0121 21:48:49.578520 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:48:49 crc kubenswrapper[4860]: E0121 21:48:49.579445 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:49:01 crc kubenswrapper[4860]: I0121 21:49:01.580168 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:49:01 crc kubenswrapper[4860]: E0121 21:49:01.581721 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:49:13 crc kubenswrapper[4860]: I0121 21:49:13.581106 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:49:13 crc kubenswrapper[4860]: E0121 21:49:13.582472 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:49:27 crc kubenswrapper[4860]: I0121 21:49:27.578684 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:49:27 crc kubenswrapper[4860]: E0121 21:49:27.579760 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:49:41 crc kubenswrapper[4860]: I0121 21:49:41.579143 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:49:41 crc kubenswrapper[4860]: E0121 21:49:41.580497 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:49:42 crc kubenswrapper[4860]: I0121 21:49:42.743622 4860 scope.go:117] "RemoveContainer" containerID="86af1660cb582df6437eb6acf1132005873c2cdf63cc2daaa581babefae63e04"
Jan 21 21:49:42 crc kubenswrapper[4860]: I0121 21:49:42.786068 4860 scope.go:117] "RemoveContainer" containerID="76f40618846fffbff60e7351aee8f861d3dbbf86a88932e442822bfd25124eb9"
Jan 21 21:49:52 crc kubenswrapper[4860]: I0121 21:49:52.582252 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:49:52 crc kubenswrapper[4860]: E0121 21:49:52.583310 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:50:05 crc kubenswrapper[4860]: I0121 21:50:05.579153 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:50:05 crc kubenswrapper[4860]: E0121 21:50:05.580443 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:50:17 crc kubenswrapper[4860]: I0121 21:50:17.579153 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:50:17 crc kubenswrapper[4860]: E0121 21:50:17.580412 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:50:31 crc kubenswrapper[4860]: I0121 21:50:31.579563 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:50:31 crc kubenswrapper[4860]: E0121 21:50:31.580464 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:50:44 crc kubenswrapper[4860]: I0121 21:50:44.579799 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:50:44 crc kubenswrapper[4860]: E0121 21:50:44.582744 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:50:55 crc kubenswrapper[4860]: I0121 21:50:55.578905 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:50:55 crc kubenswrapper[4860]: E0121 21:50:55.579803 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 21:51:07 crc kubenswrapper[4860]: I0121 21:51:07.579755 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:51:08 crc kubenswrapper[4860]: I0121 21:51:08.655293 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"0fd864151352182028c78e885df43511ab90dfab92dd7571f77d8647938a6d46"}
Jan 21 21:52:42 crc kubenswrapper[4860]: I0121 21:52:42.952484 4860 scope.go:117] "RemoveContainer" containerID="a9eccd1cf1c00424126542dd97b5861db9dac928a0803ba10a5d1d5862eade78"
Jan 21 21:52:43 crc kubenswrapper[4860]: I0121 21:52:43.012381 4860 scope.go:117] "RemoveContainer" containerID="c452f15c2af21c646aa0391252de6856c6fbeefafb0c58d33fbab51c5dc44650"
Jan 21 21:52:43 crc kubenswrapper[4860]: I0121 21:52:43.046521 4860 scope.go:117] "RemoveContainer" containerID="6cd2e9d000eba4b0570578abcf3587e21d548d799b542f75d136064d09a5e3a2"
Jan 21 21:53:32 crc kubenswrapper[4860]: I0121 21:53:32.103179 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:53:32 crc kubenswrapper[4860]: I0121 21:53:32.105470 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:54:02 crc kubenswrapper[4860]: I0121 21:54:02.103757 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:54:02 crc kubenswrapper[4860]: I0121 21:54:02.111852 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.027507 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w2lgf"]
Jan 21 21:54:20 crc kubenswrapper[4860]: E0121 21:54:20.028682 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" containerName="registry-server"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.028701 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" containerName="registry-server"
Jan 21 21:54:20 crc kubenswrapper[4860]: E0121 21:54:20.028712 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" containerName="extract-utilities"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.028720 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" containerName="extract-utilities"
Jan 21 21:54:20 crc kubenswrapper[4860]: E0121 21:54:20.028753 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" containerName="extract-content"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.028761 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" containerName="extract-content"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.028958 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1a2c0d-9720-46af-9d63-5670d5c367c9" containerName="registry-server"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.030281 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.057673 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2lgf"]
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.144851 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-catalog-content\") pod \"redhat-operators-w2lgf\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.145314 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-utilities\") pod \"redhat-operators-w2lgf\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.145554 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm2rx\" (UniqueName: \"kubernetes.io/projected/d2056b14-7def-4a7c-8541-ee9094cc6eb2-kube-api-access-jm2rx\") pod \"redhat-operators-w2lgf\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.247792 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm2rx\" (UniqueName: \"kubernetes.io/projected/d2056b14-7def-4a7c-8541-ee9094cc6eb2-kube-api-access-jm2rx\") pod \"redhat-operators-w2lgf\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.247904 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-catalog-content\") pod \"redhat-operators-w2lgf\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.247945 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-utilities\") pod \"redhat-operators-w2lgf\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.248637 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-utilities\") pod \"redhat-operators-w2lgf\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.248896 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-catalog-content\") pod \"redhat-operators-w2lgf\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.272969 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm2rx\" (UniqueName: \"kubernetes.io/projected/d2056b14-7def-4a7c-8541-ee9094cc6eb2-kube-api-access-jm2rx\") pod \"redhat-operators-w2lgf\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:20 crc kubenswrapper[4860]: I0121 21:54:20.359831 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:21 crc kubenswrapper[4860]: I0121 21:54:21.002220 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2lgf"]
Jan 21 21:54:21 crc kubenswrapper[4860]: I0121 21:54:21.988050 4860 generic.go:334] "Generic (PLEG): container finished" podID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerID="59d5757625aeff4fc2c2c1d5ad99a278d91d7f782e4eecd924d80629696e1c16" exitCode=0
Jan 21 21:54:21 crc kubenswrapper[4860]: I0121 21:54:21.988487 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2lgf" event={"ID":"d2056b14-7def-4a7c-8541-ee9094cc6eb2","Type":"ContainerDied","Data":"59d5757625aeff4fc2c2c1d5ad99a278d91d7f782e4eecd924d80629696e1c16"}
Jan 21 21:54:21 crc kubenswrapper[4860]: I0121 21:54:21.988540 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2lgf" event={"ID":"d2056b14-7def-4a7c-8541-ee9094cc6eb2","Type":"ContainerStarted","Data":"4a7f6cdf228d1d596c3787d3fca3d56b8755811823fecb8ebf2467bf7fb8297a"}
Jan 21 21:54:21 crc kubenswrapper[4860]: I0121 21:54:21.991490 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 21:54:24 crc kubenswrapper[4860]: I0121 21:54:24.012568 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2lgf" event={"ID":"d2056b14-7def-4a7c-8541-ee9094cc6eb2","Type":"ContainerStarted","Data":"52b090d8d1d306c324e4fb0ee9d589be50702567643d5a7826a88522fb574cc1"}
Jan 21 21:54:25 crc kubenswrapper[4860]: I0121 21:54:25.029412 4860 generic.go:334] "Generic (PLEG): container finished" podID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerID="52b090d8d1d306c324e4fb0ee9d589be50702567643d5a7826a88522fb574cc1" exitCode=0
Jan 21 21:54:25 crc kubenswrapper[4860]: I0121 21:54:25.029476 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2lgf" event={"ID":"d2056b14-7def-4a7c-8541-ee9094cc6eb2","Type":"ContainerDied","Data":"52b090d8d1d306c324e4fb0ee9d589be50702567643d5a7826a88522fb574cc1"}
Jan 21 21:54:26 crc kubenswrapper[4860]: I0121 21:54:26.041126 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2lgf" event={"ID":"d2056b14-7def-4a7c-8541-ee9094cc6eb2","Type":"ContainerStarted","Data":"62dc933a68ccbdda524c3a27deb07d86e8765f32398c4b92916e61352d28f9cd"}
Jan 21 21:54:26 crc kubenswrapper[4860]: I0121 21:54:26.066222 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w2lgf" podStartSLOduration=2.545325745 podStartE2EDuration="6.066190458s" podCreationTimestamp="2026-01-21 21:54:20 +0000 UTC" firstStartedPulling="2026-01-21 21:54:21.99118106 +0000 UTC m=+2754.213359530" lastFinishedPulling="2026-01-21 21:54:25.512045773 +0000 UTC m=+2757.734224243" observedRunningTime="2026-01-21 21:54:26.060899534 +0000 UTC m=+2758.283078004" watchObservedRunningTime="2026-01-21 21:54:26.066190458 +0000 UTC m=+2758.288368918"
Jan 21 21:54:30 crc kubenswrapper[4860]: I0121 21:54:30.362269 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:30 crc kubenswrapper[4860]: I0121 21:54:30.362578 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:31 crc kubenswrapper[4860]: I0121 21:54:31.424662 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w2lgf" podUID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerName="registry-server" probeResult="failure" output=<
Jan 21 21:54:31 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s
Jan 21 21:54:31 crc kubenswrapper[4860]: >
Jan 21 21:54:32 crc kubenswrapper[4860]: I0121 21:54:32.104044 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 21:54:32 crc kubenswrapper[4860]: I0121 21:54:32.104517 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 21:54:32 crc kubenswrapper[4860]: I0121 21:54:32.104656 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx"
Jan 21 21:54:32 crc kubenswrapper[4860]: I0121 21:54:32.105725 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fd864151352182028c78e885df43511ab90dfab92dd7571f77d8647938a6d46"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 21:54:32 crc kubenswrapper[4860]: I0121 21:54:32.105887 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://0fd864151352182028c78e885df43511ab90dfab92dd7571f77d8647938a6d46" gracePeriod=600
Jan 21 21:54:33 crc kubenswrapper[4860]: I0121 21:54:33.128802 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="0fd864151352182028c78e885df43511ab90dfab92dd7571f77d8647938a6d46" exitCode=0
Jan 21 21:54:33 crc kubenswrapper[4860]: I0121 21:54:33.128867 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"0fd864151352182028c78e885df43511ab90dfab92dd7571f77d8647938a6d46"}
Jan 21 21:54:33 crc kubenswrapper[4860]: I0121 21:54:33.130434 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"}
Jan 21 21:54:33 crc kubenswrapper[4860]: I0121 21:54:33.130504 4860 scope.go:117] "RemoveContainer" containerID="f9590efd3541a351caa6d5386bd8996e74d4fb9f11d41dcdb089b2a54027a02b"
Jan 21 21:54:40 crc kubenswrapper[4860]: I0121 21:54:40.424341 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:40 crc kubenswrapper[4860]: I0121 21:54:40.483253 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w2lgf"
Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.012563 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2lgf"]
Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.017210 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w2lgf" podUID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerName="registry-server" containerID="cri-o://62dc933a68ccbdda524c3a27deb07d86e8765f32398c4b92916e61352d28f9cd" gracePeriod=2
Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.242000 4860 generic.go:334] "Generic (PLEG): container finished" podID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerID="62dc933a68ccbdda524c3a27deb07d86e8765f32398c4b92916e61352d28f9cd" exitCode=0
Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.242164 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2lgf" event={"ID":"d2056b14-7def-4a7c-8541-ee9094cc6eb2","Type":"ContainerDied","Data":"62dc933a68ccbdda524c3a27deb07d86e8765f32398c4b92916e61352d28f9cd"}
Jan 21 21:54:44 crc kubenswrapper[4860]: E0121 21:54:44.581205 4860 log.go:32] "Failed when writing line to log file" err="http2: stream closed" path="/var/log/pods/openshift-must-gather-cfk7p_must-gather-rrm5z_9e27f7cf-a2f7-4552-8d62-88945d618163/gather/0.log" line={}
Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.662631 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w2lgf" Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.796358 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm2rx\" (UniqueName: \"kubernetes.io/projected/d2056b14-7def-4a7c-8541-ee9094cc6eb2-kube-api-access-jm2rx\") pod \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.796795 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-utilities\") pod \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.796819 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-catalog-content\") pod \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\" (UID: \"d2056b14-7def-4a7c-8541-ee9094cc6eb2\") " Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.797910 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-utilities" (OuterVolumeSpecName: "utilities") pod "d2056b14-7def-4a7c-8541-ee9094cc6eb2" (UID: "d2056b14-7def-4a7c-8541-ee9094cc6eb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.804801 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2056b14-7def-4a7c-8541-ee9094cc6eb2-kube-api-access-jm2rx" (OuterVolumeSpecName: "kube-api-access-jm2rx") pod "d2056b14-7def-4a7c-8541-ee9094cc6eb2" (UID: "d2056b14-7def-4a7c-8541-ee9094cc6eb2"). InnerVolumeSpecName "kube-api-access-jm2rx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.823962 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm2rx\" (UniqueName: \"kubernetes.io/projected/d2056b14-7def-4a7c-8541-ee9094cc6eb2-kube-api-access-jm2rx\") on node \"crc\" DevicePath \"\"" Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.824014 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:54:44 crc kubenswrapper[4860]: I0121 21:54:44.944080 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2056b14-7def-4a7c-8541-ee9094cc6eb2" (UID: "d2056b14-7def-4a7c-8541-ee9094cc6eb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:54:45 crc kubenswrapper[4860]: I0121 21:54:45.028039 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2056b14-7def-4a7c-8541-ee9094cc6eb2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:54:45 crc kubenswrapper[4860]: I0121 21:54:45.262849 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2lgf" event={"ID":"d2056b14-7def-4a7c-8541-ee9094cc6eb2","Type":"ContainerDied","Data":"4a7f6cdf228d1d596c3787d3fca3d56b8755811823fecb8ebf2467bf7fb8297a"} Jan 21 21:54:45 crc kubenswrapper[4860]: I0121 21:54:45.262910 4860 scope.go:117] "RemoveContainer" containerID="62dc933a68ccbdda524c3a27deb07d86e8765f32398c4b92916e61352d28f9cd" Jan 21 21:54:45 crc kubenswrapper[4860]: I0121 21:54:45.263089 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w2lgf" Jan 21 21:54:45 crc kubenswrapper[4860]: I0121 21:54:45.297258 4860 scope.go:117] "RemoveContainer" containerID="52b090d8d1d306c324e4fb0ee9d589be50702567643d5a7826a88522fb574cc1" Jan 21 21:54:45 crc kubenswrapper[4860]: I0121 21:54:45.313104 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2lgf"] Jan 21 21:54:45 crc kubenswrapper[4860]: I0121 21:54:45.322117 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w2lgf"] Jan 21 21:54:45 crc kubenswrapper[4860]: I0121 21:54:45.353141 4860 scope.go:117] "RemoveContainer" containerID="59d5757625aeff4fc2c2c1d5ad99a278d91d7f782e4eecd924d80629696e1c16" Jan 21 21:54:46 crc kubenswrapper[4860]: I0121 21:54:46.721623 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" path="/var/lib/kubelet/pods/d2056b14-7def-4a7c-8541-ee9094cc6eb2/volumes" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.075467 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2qjhd"] Jan 21 21:56:22 crc kubenswrapper[4860]: E0121 21:56:22.077019 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerName="registry-server" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.077049 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerName="registry-server" Jan 21 21:56:22 crc kubenswrapper[4860]: E0121 21:56:22.077078 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerName="extract-utilities" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.077088 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerName="extract-utilities" Jan 21 
21:56:22 crc kubenswrapper[4860]: E0121 21:56:22.077114 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerName="extract-content" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.077123 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerName="extract-content" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.077394 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2056b14-7def-4a7c-8541-ee9094cc6eb2" containerName="registry-server" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.079365 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.101146 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qjhd"] Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.191869 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-catalog-content\") pod \"redhat-marketplace-2qjhd\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.191989 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-utilities\") pod \"redhat-marketplace-2qjhd\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.192185 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqq4f\" (UniqueName: 
\"kubernetes.io/projected/42f14ba3-72c2-4e55-b65c-e202c25bafae-kube-api-access-kqq4f\") pod \"redhat-marketplace-2qjhd\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.294512 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqq4f\" (UniqueName: \"kubernetes.io/projected/42f14ba3-72c2-4e55-b65c-e202c25bafae-kube-api-access-kqq4f\") pod \"redhat-marketplace-2qjhd\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.294646 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-catalog-content\") pod \"redhat-marketplace-2qjhd\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.294681 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-utilities\") pod \"redhat-marketplace-2qjhd\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.295605 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-utilities\") pod \"redhat-marketplace-2qjhd\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.295605 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-catalog-content\") pod \"redhat-marketplace-2qjhd\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.319303 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqq4f\" (UniqueName: \"kubernetes.io/projected/42f14ba3-72c2-4e55-b65c-e202c25bafae-kube-api-access-kqq4f\") pod \"redhat-marketplace-2qjhd\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.401649 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:22 crc kubenswrapper[4860]: I0121 21:56:22.976518 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qjhd"] Jan 21 21:56:23 crc kubenswrapper[4860]: I0121 21:56:23.363686 4860 generic.go:334] "Generic (PLEG): container finished" podID="42f14ba3-72c2-4e55-b65c-e202c25bafae" containerID="00e39b406928fac67bcbd0eef073a2e4c36015750bb4f81bd6986d0de6cda4c6" exitCode=0 Jan 21 21:56:23 crc kubenswrapper[4860]: I0121 21:56:23.363751 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qjhd" event={"ID":"42f14ba3-72c2-4e55-b65c-e202c25bafae","Type":"ContainerDied","Data":"00e39b406928fac67bcbd0eef073a2e4c36015750bb4f81bd6986d0de6cda4c6"} Jan 21 21:56:23 crc kubenswrapper[4860]: I0121 21:56:23.363805 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qjhd" event={"ID":"42f14ba3-72c2-4e55-b65c-e202c25bafae","Type":"ContainerStarted","Data":"84166ba07649df3a84f03aec1ec7717089ed6db7dcb5ad82672ca55cb0389645"} Jan 21 21:56:24 crc kubenswrapper[4860]: I0121 21:56:24.390430 4860 generic.go:334] "Generic (PLEG): container 
finished" podID="42f14ba3-72c2-4e55-b65c-e202c25bafae" containerID="ecc33193612046c411da9a41fdfb11213333fb181ad229a6d232b382260e0fda" exitCode=0 Jan 21 21:56:24 crc kubenswrapper[4860]: I0121 21:56:24.390833 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qjhd" event={"ID":"42f14ba3-72c2-4e55-b65c-e202c25bafae","Type":"ContainerDied","Data":"ecc33193612046c411da9a41fdfb11213333fb181ad229a6d232b382260e0fda"} Jan 21 21:56:25 crc kubenswrapper[4860]: I0121 21:56:25.405075 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qjhd" event={"ID":"42f14ba3-72c2-4e55-b65c-e202c25bafae","Type":"ContainerStarted","Data":"7a7fd788ed05fc0f395cfb563fb4d4be47ea2b01d3f8c3a797cf4929653baae4"} Jan 21 21:56:25 crc kubenswrapper[4860]: I0121 21:56:25.443843 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2qjhd" podStartSLOduration=2.015940477 podStartE2EDuration="3.443819396s" podCreationTimestamp="2026-01-21 21:56:22 +0000 UTC" firstStartedPulling="2026-01-21 21:56:23.365367454 +0000 UTC m=+2875.587545924" lastFinishedPulling="2026-01-21 21:56:24.793246373 +0000 UTC m=+2877.015424843" observedRunningTime="2026-01-21 21:56:25.433792746 +0000 UTC m=+2877.655971226" watchObservedRunningTime="2026-01-21 21:56:25.443819396 +0000 UTC m=+2877.665997866" Jan 21 21:56:32 crc kubenswrapper[4860]: I0121 21:56:32.103434 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:56:32 crc kubenswrapper[4860]: I0121 21:56:32.104161 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" 
podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:56:32 crc kubenswrapper[4860]: I0121 21:56:32.402873 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:32 crc kubenswrapper[4860]: I0121 21:56:32.402964 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:32 crc kubenswrapper[4860]: I0121 21:56:32.456746 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:32 crc kubenswrapper[4860]: I0121 21:56:32.537141 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:36 crc kubenswrapper[4860]: I0121 21:56:36.004133 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qjhd"] Jan 21 21:56:36 crc kubenswrapper[4860]: I0121 21:56:36.005530 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2qjhd" podUID="42f14ba3-72c2-4e55-b65c-e202c25bafae" containerName="registry-server" containerID="cri-o://7a7fd788ed05fc0f395cfb563fb4d4be47ea2b01d3f8c3a797cf4929653baae4" gracePeriod=2 Jan 21 21:56:36 crc kubenswrapper[4860]: I0121 21:56:36.523844 4860 generic.go:334] "Generic (PLEG): container finished" podID="42f14ba3-72c2-4e55-b65c-e202c25bafae" containerID="7a7fd788ed05fc0f395cfb563fb4d4be47ea2b01d3f8c3a797cf4929653baae4" exitCode=0 Jan 21 21:56:36 crc kubenswrapper[4860]: I0121 21:56:36.523902 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qjhd" 
event={"ID":"42f14ba3-72c2-4e55-b65c-e202c25bafae","Type":"ContainerDied","Data":"7a7fd788ed05fc0f395cfb563fb4d4be47ea2b01d3f8c3a797cf4929653baae4"} Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.131094 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.237173 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-catalog-content\") pod \"42f14ba3-72c2-4e55-b65c-e202c25bafae\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.237482 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-utilities\") pod \"42f14ba3-72c2-4e55-b65c-e202c25bafae\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.237523 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqq4f\" (UniqueName: \"kubernetes.io/projected/42f14ba3-72c2-4e55-b65c-e202c25bafae-kube-api-access-kqq4f\") pod \"42f14ba3-72c2-4e55-b65c-e202c25bafae\" (UID: \"42f14ba3-72c2-4e55-b65c-e202c25bafae\") " Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.238366 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-utilities" (OuterVolumeSpecName: "utilities") pod "42f14ba3-72c2-4e55-b65c-e202c25bafae" (UID: "42f14ba3-72c2-4e55-b65c-e202c25bafae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.238953 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.259823 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f14ba3-72c2-4e55-b65c-e202c25bafae-kube-api-access-kqq4f" (OuterVolumeSpecName: "kube-api-access-kqq4f") pod "42f14ba3-72c2-4e55-b65c-e202c25bafae" (UID: "42f14ba3-72c2-4e55-b65c-e202c25bafae"). InnerVolumeSpecName "kube-api-access-kqq4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.262117 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42f14ba3-72c2-4e55-b65c-e202c25bafae" (UID: "42f14ba3-72c2-4e55-b65c-e202c25bafae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.341051 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqq4f\" (UniqueName: \"kubernetes.io/projected/42f14ba3-72c2-4e55-b65c-e202c25bafae-kube-api-access-kqq4f\") on node \"crc\" DevicePath \"\"" Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.341104 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f14ba3-72c2-4e55-b65c-e202c25bafae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.535501 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qjhd" event={"ID":"42f14ba3-72c2-4e55-b65c-e202c25bafae","Type":"ContainerDied","Data":"84166ba07649df3a84f03aec1ec7717089ed6db7dcb5ad82672ca55cb0389645"} Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.535590 4860 scope.go:117] "RemoveContainer" containerID="7a7fd788ed05fc0f395cfb563fb4d4be47ea2b01d3f8c3a797cf4929653baae4" Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.535600 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qjhd" Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.583203 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qjhd"] Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.590100 4860 scope.go:117] "RemoveContainer" containerID="ecc33193612046c411da9a41fdfb11213333fb181ad229a6d232b382260e0fda" Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.594551 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qjhd"] Jan 21 21:56:37 crc kubenswrapper[4860]: I0121 21:56:37.611421 4860 scope.go:117] "RemoveContainer" containerID="00e39b406928fac67bcbd0eef073a2e4c36015750bb4f81bd6986d0de6cda4c6" Jan 21 21:56:38 crc kubenswrapper[4860]: I0121 21:56:38.593800 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f14ba3-72c2-4e55-b65c-e202c25bafae" path="/var/lib/kubelet/pods/42f14ba3-72c2-4e55-b65c-e202c25bafae/volumes" Jan 21 21:57:02 crc kubenswrapper[4860]: I0121 21:57:02.104057 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:57:02 crc kubenswrapper[4860]: I0121 21:57:02.111226 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:57:32 crc kubenswrapper[4860]: I0121 21:57:32.104076 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 21:57:32 crc kubenswrapper[4860]: I0121 21:57:32.104811 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 21:57:32 crc kubenswrapper[4860]: I0121 21:57:32.104868 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 21:57:32 crc kubenswrapper[4860]: I0121 21:57:32.105909 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 21:57:32 crc kubenswrapper[4860]: I0121 21:57:32.105991 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" gracePeriod=600 Jan 21 21:57:32 crc kubenswrapper[4860]: E0121 21:57:32.234980 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:57:33 crc kubenswrapper[4860]: I0121 21:57:33.069316 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" exitCode=0 Jan 21 21:57:33 crc kubenswrapper[4860]: I0121 21:57:33.069350 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"} Jan 21 21:57:33 crc kubenswrapper[4860]: I0121 21:57:33.069411 4860 scope.go:117] "RemoveContainer" containerID="0fd864151352182028c78e885df43511ab90dfab92dd7571f77d8647938a6d46" Jan 21 21:57:33 crc kubenswrapper[4860]: I0121 21:57:33.070323 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:57:33 crc kubenswrapper[4860]: E0121 21:57:33.070598 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.022200 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t6jj9"] Jan 21 21:57:43 crc kubenswrapper[4860]: E0121 21:57:43.027919 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f14ba3-72c2-4e55-b65c-e202c25bafae" containerName="extract-content" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.028006 4860 
state_mem.go:107] "Deleted CPUSet assignment" podUID="42f14ba3-72c2-4e55-b65c-e202c25bafae" containerName="extract-content" Jan 21 21:57:43 crc kubenswrapper[4860]: E0121 21:57:43.028038 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f14ba3-72c2-4e55-b65c-e202c25bafae" containerName="extract-utilities" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.028048 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f14ba3-72c2-4e55-b65c-e202c25bafae" containerName="extract-utilities" Jan 21 21:57:43 crc kubenswrapper[4860]: E0121 21:57:43.028067 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f14ba3-72c2-4e55-b65c-e202c25bafae" containerName="registry-server" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.028076 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f14ba3-72c2-4e55-b65c-e202c25bafae" containerName="registry-server" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.028344 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f14ba3-72c2-4e55-b65c-e202c25bafae" containerName="registry-server" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.030148 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.043388 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6jj9"] Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.084412 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-utilities\") pod \"certified-operators-t6jj9\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.084557 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-catalog-content\") pod \"certified-operators-t6jj9\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.084620 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tzgh\" (UniqueName: \"kubernetes.io/projected/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-kube-api-access-4tzgh\") pod \"certified-operators-t6jj9\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.186976 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tzgh\" (UniqueName: \"kubernetes.io/projected/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-kube-api-access-4tzgh\") pod \"certified-operators-t6jj9\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.187215 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-utilities\") pod \"certified-operators-t6jj9\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.187331 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-catalog-content\") pod \"certified-operators-t6jj9\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.187979 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-utilities\") pod \"certified-operators-t6jj9\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.188046 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-catalog-content\") pod \"certified-operators-t6jj9\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.213057 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tzgh\" (UniqueName: \"kubernetes.io/projected/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-kube-api-access-4tzgh\") pod \"certified-operators-t6jj9\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:43 crc kubenswrapper[4860]: I0121 21:57:43.366594 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:44 crc kubenswrapper[4860]: I0121 21:57:44.011579 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6jj9"] Jan 21 21:57:44 crc kubenswrapper[4860]: I0121 21:57:44.169156 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6jj9" event={"ID":"e2e36e62-8efb-4c94-a398-5d5b7b21b92d","Type":"ContainerStarted","Data":"8ad1cc5feb4db449ce6718d0b7dbe0dd27dabadcfbae0ab4165b0305a95c594c"} Jan 21 21:57:45 crc kubenswrapper[4860]: I0121 21:57:45.182534 4860 generic.go:334] "Generic (PLEG): container finished" podID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" containerID="2bbecb6a81ec700bca9f470fb90aa5f6cd45e1c56732de836e39368d98604a7b" exitCode=0 Jan 21 21:57:45 crc kubenswrapper[4860]: I0121 21:57:45.182758 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6jj9" event={"ID":"e2e36e62-8efb-4c94-a398-5d5b7b21b92d","Type":"ContainerDied","Data":"2bbecb6a81ec700bca9f470fb90aa5f6cd45e1c56732de836e39368d98604a7b"} Jan 21 21:57:46 crc kubenswrapper[4860]: I0121 21:57:46.193278 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6jj9" event={"ID":"e2e36e62-8efb-4c94-a398-5d5b7b21b92d","Type":"ContainerStarted","Data":"e473431265848f0fd53e4486eec3a8f083b4ce4ddebdb63a2fd43cb010956d0c"} Jan 21 21:57:47 crc kubenswrapper[4860]: I0121 21:57:47.203031 4860 generic.go:334] "Generic (PLEG): container finished" podID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" containerID="e473431265848f0fd53e4486eec3a8f083b4ce4ddebdb63a2fd43cb010956d0c" exitCode=0 Jan 21 21:57:47 crc kubenswrapper[4860]: I0121 21:57:47.203078 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6jj9" 
event={"ID":"e2e36e62-8efb-4c94-a398-5d5b7b21b92d","Type":"ContainerDied","Data":"e473431265848f0fd53e4486eec3a8f083b4ce4ddebdb63a2fd43cb010956d0c"} Jan 21 21:57:48 crc kubenswrapper[4860]: I0121 21:57:48.217492 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6jj9" event={"ID":"e2e36e62-8efb-4c94-a398-5d5b7b21b92d","Type":"ContainerStarted","Data":"b4af755e5d7b6a1c28924a6004eb36f7abc28c6aece9848a0817cb78200475c4"} Jan 21 21:57:48 crc kubenswrapper[4860]: I0121 21:57:48.242881 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t6jj9" podStartSLOduration=3.854527236 podStartE2EDuration="6.242831567s" podCreationTimestamp="2026-01-21 21:57:42 +0000 UTC" firstStartedPulling="2026-01-21 21:57:45.18483969 +0000 UTC m=+2957.407018160" lastFinishedPulling="2026-01-21 21:57:47.573144021 +0000 UTC m=+2959.795322491" observedRunningTime="2026-01-21 21:57:48.239153573 +0000 UTC m=+2960.461332053" watchObservedRunningTime="2026-01-21 21:57:48.242831567 +0000 UTC m=+2960.465010037" Jan 21 21:57:48 crc kubenswrapper[4860]: I0121 21:57:48.587162 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:57:48 crc kubenswrapper[4860]: E0121 21:57:48.587541 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:57:53 crc kubenswrapper[4860]: I0121 21:57:53.367681 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:53 crc 
kubenswrapper[4860]: I0121 21:57:53.368129 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:53 crc kubenswrapper[4860]: I0121 21:57:53.440571 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:54 crc kubenswrapper[4860]: I0121 21:57:54.323421 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:57 crc kubenswrapper[4860]: I0121 21:57:57.008994 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6jj9"] Jan 21 21:57:57 crc kubenswrapper[4860]: I0121 21:57:57.009686 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t6jj9" podUID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" containerName="registry-server" containerID="cri-o://b4af755e5d7b6a1c28924a6004eb36f7abc28c6aece9848a0817cb78200475c4" gracePeriod=2 Jan 21 21:57:57 crc kubenswrapper[4860]: I0121 21:57:57.303812 4860 generic.go:334] "Generic (PLEG): container finished" podID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" containerID="b4af755e5d7b6a1c28924a6004eb36f7abc28c6aece9848a0817cb78200475c4" exitCode=0 Jan 21 21:57:57 crc kubenswrapper[4860]: I0121 21:57:57.303905 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6jj9" event={"ID":"e2e36e62-8efb-4c94-a398-5d5b7b21b92d","Type":"ContainerDied","Data":"b4af755e5d7b6a1c28924a6004eb36f7abc28c6aece9848a0817cb78200475c4"} Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.045186 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.130246 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tzgh\" (UniqueName: \"kubernetes.io/projected/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-kube-api-access-4tzgh\") pod \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.130380 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-catalog-content\") pod \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.130439 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-utilities\") pod \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\" (UID: \"e2e36e62-8efb-4c94-a398-5d5b7b21b92d\") " Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.131772 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-utilities" (OuterVolumeSpecName: "utilities") pod "e2e36e62-8efb-4c94-a398-5d5b7b21b92d" (UID: "e2e36e62-8efb-4c94-a398-5d5b7b21b92d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.147923 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-kube-api-access-4tzgh" (OuterVolumeSpecName: "kube-api-access-4tzgh") pod "e2e36e62-8efb-4c94-a398-5d5b7b21b92d" (UID: "e2e36e62-8efb-4c94-a398-5d5b7b21b92d"). InnerVolumeSpecName "kube-api-access-4tzgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.189408 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2e36e62-8efb-4c94-a398-5d5b7b21b92d" (UID: "e2e36e62-8efb-4c94-a398-5d5b7b21b92d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.232665 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.232705 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.232717 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tzgh\" (UniqueName: \"kubernetes.io/projected/e2e36e62-8efb-4c94-a398-5d5b7b21b92d-kube-api-access-4tzgh\") on node \"crc\" DevicePath \"\"" Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.317567 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6jj9" event={"ID":"e2e36e62-8efb-4c94-a398-5d5b7b21b92d","Type":"ContainerDied","Data":"8ad1cc5feb4db449ce6718d0b7dbe0dd27dabadcfbae0ab4165b0305a95c594c"} Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.317635 4860 scope.go:117] "RemoveContainer" containerID="b4af755e5d7b6a1c28924a6004eb36f7abc28c6aece9848a0817cb78200475c4" Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.317661 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6jj9" Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.358303 4860 scope.go:117] "RemoveContainer" containerID="e473431265848f0fd53e4486eec3a8f083b4ce4ddebdb63a2fd43cb010956d0c" Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.363394 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6jj9"] Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.372857 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t6jj9"] Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.380102 4860 scope.go:117] "RemoveContainer" containerID="2bbecb6a81ec700bca9f470fb90aa5f6cd45e1c56732de836e39368d98604a7b" Jan 21 21:57:58 crc kubenswrapper[4860]: I0121 21:57:58.599401 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" path="/var/lib/kubelet/pods/e2e36e62-8efb-4c94-a398-5d5b7b21b92d/volumes" Jan 21 21:58:01 crc kubenswrapper[4860]: I0121 21:58:01.585699 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:58:01 crc kubenswrapper[4860]: E0121 21:58:01.586569 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:58:12 crc kubenswrapper[4860]: I0121 21:58:12.581619 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:58:12 crc kubenswrapper[4860]: E0121 21:58:12.582549 4860 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:58:27 crc kubenswrapper[4860]: I0121 21:58:27.579835 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:58:27 crc kubenswrapper[4860]: E0121 21:58:27.581003 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:58:39 crc kubenswrapper[4860]: I0121 21:58:39.578844 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:58:39 crc kubenswrapper[4860]: E0121 21:58:39.579961 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:58:52 crc kubenswrapper[4860]: I0121 21:58:52.579283 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:58:52 crc kubenswrapper[4860]: E0121 21:58:52.580089 4860 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:59:06 crc kubenswrapper[4860]: I0121 21:59:06.579583 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:59:06 crc kubenswrapper[4860]: E0121 21:59:06.580328 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:59:19 crc kubenswrapper[4860]: I0121 21:59:19.579879 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:59:19 crc kubenswrapper[4860]: E0121 21:59:19.580993 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:59:30 crc kubenswrapper[4860]: I0121 21:59:30.711989 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:59:30 crc kubenswrapper[4860]: E0121 21:59:30.713254 4860 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:59:43 crc kubenswrapper[4860]: I0121 21:59:43.580122 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:59:43 crc kubenswrapper[4860]: E0121 21:59:43.581851 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 21:59:57 crc kubenswrapper[4860]: I0121 21:59:57.580179 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 21:59:57 crc kubenswrapper[4860]: E0121 21:59:57.581402 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.162724 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4"] Jan 21 22:00:00 crc 
kubenswrapper[4860]: E0121 22:00:00.163594 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" containerName="extract-utilities" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.163616 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" containerName="extract-utilities" Jan 21 22:00:00 crc kubenswrapper[4860]: E0121 22:00:00.163630 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" containerName="extract-content" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.163637 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" containerName="extract-content" Jan 21 22:00:00 crc kubenswrapper[4860]: E0121 22:00:00.163655 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" containerName="registry-server" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.163662 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" containerName="registry-server" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.163873 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e36e62-8efb-4c94-a398-5d5b7b21b92d" containerName="registry-server" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.164636 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.167488 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.168827 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.189274 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4"] Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.252181 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-secret-volume\") pod \"collect-profiles-29483880-b48g4\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.252252 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlvzh\" (UniqueName: \"kubernetes.io/projected/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-kube-api-access-rlvzh\") pod \"collect-profiles-29483880-b48g4\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.252299 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-config-volume\") pod \"collect-profiles-29483880-b48g4\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.354540 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlvzh\" (UniqueName: \"kubernetes.io/projected/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-kube-api-access-rlvzh\") pod \"collect-profiles-29483880-b48g4\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.354670 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-config-volume\") pod \"collect-profiles-29483880-b48g4\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.354850 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-secret-volume\") pod \"collect-profiles-29483880-b48g4\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.356643 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-config-volume\") pod \"collect-profiles-29483880-b48g4\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.402353 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-secret-volume\") pod \"collect-profiles-29483880-b48g4\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.410135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlvzh\" (UniqueName: \"kubernetes.io/projected/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-kube-api-access-rlvzh\") pod \"collect-profiles-29483880-b48g4\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:00 crc kubenswrapper[4860]: I0121 22:00:00.494405 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:01 crc kubenswrapper[4860]: I0121 22:00:01.041000 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4"] Jan 21 22:00:01 crc kubenswrapper[4860]: I0121 22:00:01.952756 4860 generic.go:334] "Generic (PLEG): container finished" podID="00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e" containerID="a869d57e3d431f2f33c2aa2ec3b5f9535fae263d8ffe4634e6377c8f563f8e1d" exitCode=0 Jan 21 22:00:01 crc kubenswrapper[4860]: I0121 22:00:01.952853 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" event={"ID":"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e","Type":"ContainerDied","Data":"a869d57e3d431f2f33c2aa2ec3b5f9535fae263d8ffe4634e6377c8f563f8e1d"} Jan 21 22:00:01 crc kubenswrapper[4860]: I0121 22:00:01.953193 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" 
event={"ID":"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e","Type":"ContainerStarted","Data":"fae74939a9e219674e457e113ef9cfa434130c72a9dc35d5e9962dd5c7f330ec"} Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.313366 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.432926 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-config-volume\") pod \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.433177 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlvzh\" (UniqueName: \"kubernetes.io/projected/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-kube-api-access-rlvzh\") pod \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.433208 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-secret-volume\") pod \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\" (UID: \"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e\") " Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.435220 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-config-volume" (OuterVolumeSpecName: "config-volume") pod "00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e" (UID: "00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.441332 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e" (UID: "00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.442508 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-kube-api-access-rlvzh" (OuterVolumeSpecName: "kube-api-access-rlvzh") pod "00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e" (UID: "00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e"). InnerVolumeSpecName "kube-api-access-rlvzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.535976 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlvzh\" (UniqueName: \"kubernetes.io/projected/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-kube-api-access-rlvzh\") on node \"crc\" DevicePath \"\""
Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.536037 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.536054 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.976397 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4" event={"ID":"00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e","Type":"ContainerDied","Data":"fae74939a9e219674e457e113ef9cfa434130c72a9dc35d5e9962dd5c7f330ec"}
Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.976461 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae74939a9e219674e457e113ef9cfa434130c72a9dc35d5e9962dd5c7f330ec"
Jan 21 22:00:03 crc kubenswrapper[4860]: I0121 22:00:03.976518 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483880-b48g4"
Jan 21 22:00:04 crc kubenswrapper[4860]: I0121 22:00:04.430189 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs"]
Jan 21 22:00:04 crc kubenswrapper[4860]: I0121 22:00:04.438248 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483835-2x7rs"]
Jan 21 22:00:04 crc kubenswrapper[4860]: I0121 22:00:04.602056 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a" path="/var/lib/kubelet/pods/2d2bb7b3-a9c3-4994-9085-0a5fd5b67c6a/volumes"
Jan 21 22:00:12 crc kubenswrapper[4860]: I0121 22:00:12.579737 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"
Jan 21 22:00:12 crc kubenswrapper[4860]: E0121 22:00:12.582307 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 22:00:24 crc kubenswrapper[4860]: I0121 22:00:24.580873 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"
Jan 21 22:00:24 crc kubenswrapper[4860]: E0121 22:00:24.581917 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 22:00:35 crc kubenswrapper[4860]: I0121 22:00:35.579192 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"
Jan 21 22:00:35 crc kubenswrapper[4860]: E0121 22:00:35.580070 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 22:00:43 crc kubenswrapper[4860]: I0121 22:00:43.290626 4860 scope.go:117] "RemoveContainer" containerID="a6a1df94e7fce71982911853f7701b3f5bbcde0bf5bd5b62361a2d2a9da5ebbf"
Jan 21 22:00:49 crc kubenswrapper[4860]: I0121 22:00:49.579138 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"
Jan 21 22:00:49 crc kubenswrapper[4860]: E0121 22:00:49.580408 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.155982 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-cron-29483881-ds77d"]
Jan 21 22:01:00 crc kubenswrapper[4860]: E0121 22:01:00.157432 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e" containerName="collect-profiles"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.157452 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e" containerName="collect-profiles"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.157654 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="00903b4e-e7cb-4c8a-9ca9-43637d4cdf2e" containerName="collect-profiles"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.158448 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.195422 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-cron-29483881-ds77d"]
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.336122 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2b6k\" (UniqueName: \"kubernetes.io/projected/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-kube-api-access-p2b6k\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.336204 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-config-data\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.336232 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-fernet-keys\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.336273 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-cert-memcached-mtls\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.336346 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-combined-ca-bundle\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.438096 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2b6k\" (UniqueName: \"kubernetes.io/projected/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-kube-api-access-p2b6k\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.438189 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-config-data\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.438227 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-fernet-keys\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.438282 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-cert-memcached-mtls\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.438374 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-combined-ca-bundle\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.451906 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-config-data\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.451979 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-fernet-keys\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.452969 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-cert-memcached-mtls\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.461378 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-combined-ca-bundle\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.472073 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2b6k\" (UniqueName: \"kubernetes.io/projected/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-kube-api-access-p2b6k\") pod \"keystone-cron-29483881-ds77d\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") " pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:00 crc kubenswrapper[4860]: I0121 22:01:00.481004 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:01 crc kubenswrapper[4860]: I0121 22:01:01.028755 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-cron-29483881-ds77d"]
Jan 21 22:01:01 crc kubenswrapper[4860]: I0121 22:01:01.572988 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29483881-ds77d" event={"ID":"62974097-bc1f-49ab-8b89-2c2f1f7b3c58","Type":"ContainerStarted","Data":"26e07f38ef9ba7f09b0c197a6389136714d0849c7440400f62b7205a00eadb27"}
Jan 21 22:01:01 crc kubenswrapper[4860]: I0121 22:01:01.573552 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29483881-ds77d" event={"ID":"62974097-bc1f-49ab-8b89-2c2f1f7b3c58","Type":"ContainerStarted","Data":"b5cfcd59b166f28386cd69f7ba1e57d1519178f2b068e2419e673b81eeb782bd"}
Jan 21 22:01:01 crc kubenswrapper[4860]: I0121 22:01:01.600545 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-cron-29483881-ds77d" podStartSLOduration=1.600502535 podStartE2EDuration="1.600502535s" podCreationTimestamp="2026-01-21 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 22:01:01.59585035 +0000 UTC m=+3153.818028820" watchObservedRunningTime="2026-01-21 22:01:01.600502535 +0000 UTC m=+3153.822681005"
Jan 21 22:01:02 crc kubenswrapper[4860]: I0121 22:01:02.580221 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"
Jan 21 22:01:02 crc kubenswrapper[4860]: E0121 22:01:02.581021 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 22:01:04 crc kubenswrapper[4860]: I0121 22:01:04.607830 4860 generic.go:334] "Generic (PLEG): container finished" podID="62974097-bc1f-49ab-8b89-2c2f1f7b3c58" containerID="26e07f38ef9ba7f09b0c197a6389136714d0849c7440400f62b7205a00eadb27" exitCode=0
Jan 21 22:01:04 crc kubenswrapper[4860]: I0121 22:01:04.607930 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29483881-ds77d" event={"ID":"62974097-bc1f-49ab-8b89-2c2f1f7b3c58","Type":"ContainerDied","Data":"26e07f38ef9ba7f09b0c197a6389136714d0849c7440400f62b7205a00eadb27"}
Jan 21 22:01:05 crc kubenswrapper[4860]: I0121 22:01:05.991390 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.189485 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2b6k\" (UniqueName: \"kubernetes.io/projected/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-kube-api-access-p2b6k\") pod \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") "
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.189589 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-config-data\") pod \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") "
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.189694 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-cert-memcached-mtls\") pod \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") "
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.189729 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-fernet-keys\") pod \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") "
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.189812 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-combined-ca-bundle\") pod \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\" (UID: \"62974097-bc1f-49ab-8b89-2c2f1f7b3c58\") "
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.206439 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-kube-api-access-p2b6k" (OuterVolumeSpecName: "kube-api-access-p2b6k") pod "62974097-bc1f-49ab-8b89-2c2f1f7b3c58" (UID: "62974097-bc1f-49ab-8b89-2c2f1f7b3c58"). InnerVolumeSpecName "kube-api-access-p2b6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.210719 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "62974097-bc1f-49ab-8b89-2c2f1f7b3c58" (UID: "62974097-bc1f-49ab-8b89-2c2f1f7b3c58"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.231863 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62974097-bc1f-49ab-8b89-2c2f1f7b3c58" (UID: "62974097-bc1f-49ab-8b89-2c2f1f7b3c58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.254200 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-config-data" (OuterVolumeSpecName: "config-data") pod "62974097-bc1f-49ab-8b89-2c2f1f7b3c58" (UID: "62974097-bc1f-49ab-8b89-2c2f1f7b3c58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.277718 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "62974097-bc1f-49ab-8b89-2c2f1f7b3c58" (UID: "62974097-bc1f-49ab-8b89-2c2f1f7b3c58"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.292920 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2b6k\" (UniqueName: \"kubernetes.io/projected/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-kube-api-access-p2b6k\") on node \"crc\" DevicePath \"\""
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.292992 4860 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.293007 4860 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.293019 4860 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.293030 4860 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62974097-bc1f-49ab-8b89-2c2f1f7b3c58-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.630906 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29483881-ds77d" event={"ID":"62974097-bc1f-49ab-8b89-2c2f1f7b3c58","Type":"ContainerDied","Data":"b5cfcd59b166f28386cd69f7ba1e57d1519178f2b068e2419e673b81eeb782bd"}
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.631211 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5cfcd59b166f28386cd69f7ba1e57d1519178f2b068e2419e673b81eeb782bd"
Jan 21 22:01:06 crc kubenswrapper[4860]: I0121 22:01:06.630993 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29483881-ds77d"
Jan 21 22:01:16 crc kubenswrapper[4860]: I0121 22:01:16.579985 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"
Jan 21 22:01:16 crc kubenswrapper[4860]: E0121 22:01:16.581077 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 22:01:27 crc kubenswrapper[4860]: I0121 22:01:27.579336 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"
Jan 21 22:01:27 crc kubenswrapper[4860]: E0121 22:01:27.580415 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 22:01:39 crc kubenswrapper[4860]: I0121 22:01:39.579685 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"
Jan 21 22:01:39 crc kubenswrapper[4860]: E0121 22:01:39.580857 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.423830 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dqm2x"]
Jan 21 22:01:48 crc kubenswrapper[4860]: E0121 22:01:48.425094 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62974097-bc1f-49ab-8b89-2c2f1f7b3c58" containerName="keystone-cron"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.425112 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="62974097-bc1f-49ab-8b89-2c2f1f7b3c58" containerName="keystone-cron"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.425320 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="62974097-bc1f-49ab-8b89-2c2f1f7b3c58" containerName="keystone-cron"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.447373 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqm2x"]
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.447612 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.605420 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-catalog-content\") pod \"community-operators-dqm2x\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") " pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.605492 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzb2f\" (UniqueName: \"kubernetes.io/projected/03510882-13a9-4490-841f-3704c415e49d-kube-api-access-gzb2f\") pod \"community-operators-dqm2x\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") " pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.605554 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-utilities\") pod \"community-operators-dqm2x\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") " pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.707542 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-catalog-content\") pod \"community-operators-dqm2x\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") " pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.707667 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzb2f\" (UniqueName: \"kubernetes.io/projected/03510882-13a9-4490-841f-3704c415e49d-kube-api-access-gzb2f\") pod \"community-operators-dqm2x\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") " pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.707777 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-utilities\") pod \"community-operators-dqm2x\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") " pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.709415 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-utilities\") pod \"community-operators-dqm2x\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") " pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.709800 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-catalog-content\") pod \"community-operators-dqm2x\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") " pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.736306 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzb2f\" (UniqueName: \"kubernetes.io/projected/03510882-13a9-4490-841f-3704c415e49d-kube-api-access-gzb2f\") pod \"community-operators-dqm2x\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") " pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:48 crc kubenswrapper[4860]: I0121 22:01:48.786625 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:49 crc kubenswrapper[4860]: I0121 22:01:49.374742 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqm2x"]
Jan 21 22:01:50 crc kubenswrapper[4860]: I0121 22:01:50.107836 4860 generic.go:334] "Generic (PLEG): container finished" podID="03510882-13a9-4490-841f-3704c415e49d" containerID="fc9976e2cca59b52453f60c425c65d3e1915f102de6fe787e7be662d457b122a" exitCode=0
Jan 21 22:01:50 crc kubenswrapper[4860]: I0121 22:01:50.107964 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqm2x" event={"ID":"03510882-13a9-4490-841f-3704c415e49d","Type":"ContainerDied","Data":"fc9976e2cca59b52453f60c425c65d3e1915f102de6fe787e7be662d457b122a"}
Jan 21 22:01:50 crc kubenswrapper[4860]: I0121 22:01:50.108368 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqm2x" event={"ID":"03510882-13a9-4490-841f-3704c415e49d","Type":"ContainerStarted","Data":"b034c6e6b102ee56f3cec84fe0349eedeaeba93b0afe8ccced086d54528bb844"}
Jan 21 22:01:50 crc kubenswrapper[4860]: I0121 22:01:50.111399 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 22:01:50 crc kubenswrapper[4860]: I0121 22:01:50.579283 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768"
Jan 21 22:01:50 crc kubenswrapper[4860]: E0121 22:01:50.580246 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea"
Jan 21 22:01:51 crc kubenswrapper[4860]: I0121 22:01:51.119715 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqm2x" event={"ID":"03510882-13a9-4490-841f-3704c415e49d","Type":"ContainerStarted","Data":"b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245"}
Jan 21 22:01:52 crc kubenswrapper[4860]: I0121 22:01:52.135088 4860 generic.go:334] "Generic (PLEG): container finished" podID="03510882-13a9-4490-841f-3704c415e49d" containerID="b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245" exitCode=0
Jan 21 22:01:52 crc kubenswrapper[4860]: I0121 22:01:52.135243 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqm2x" event={"ID":"03510882-13a9-4490-841f-3704c415e49d","Type":"ContainerDied","Data":"b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245"}
Jan 21 22:01:53 crc kubenswrapper[4860]: I0121 22:01:53.150908 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqm2x" event={"ID":"03510882-13a9-4490-841f-3704c415e49d","Type":"ContainerStarted","Data":"b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615"}
Jan 21 22:01:53 crc kubenswrapper[4860]: I0121 22:01:53.181272 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dqm2x" podStartSLOduration=2.738803886 podStartE2EDuration="5.181216767s" podCreationTimestamp="2026-01-21 22:01:48 +0000 UTC" firstStartedPulling="2026-01-21 22:01:50.110729926 +0000 UTC m=+3202.332908396" lastFinishedPulling="2026-01-21 22:01:52.553142797 +0000 UTC m=+3204.775321277" observedRunningTime="2026-01-21 22:01:53.179272637 +0000 UTC m=+3205.401451117" watchObservedRunningTime="2026-01-21 22:01:53.181216767 +0000 UTC m=+3205.403395237"
Jan 21 22:01:58 crc kubenswrapper[4860]: I0121 22:01:58.787412 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:58 crc kubenswrapper[4860]: I0121 22:01:58.788752 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:58 crc kubenswrapper[4860]: I0121 22:01:58.880354 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:01:59 crc kubenswrapper[4860]: I0121 22:01:59.279492 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:02:00 crc kubenswrapper[4860]: I0121 22:02:00.810410 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqm2x"]
Jan 21 22:02:01 crc kubenswrapper[4860]: I0121 22:02:01.248997 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dqm2x" podUID="03510882-13a9-4490-841f-3704c415e49d" containerName="registry-server" containerID="cri-o://b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615" gracePeriod=2
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.227299 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.266689 4860 generic.go:334] "Generic (PLEG): container finished" podID="03510882-13a9-4490-841f-3704c415e49d" containerID="b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615" exitCode=0
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.266756 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqm2x" event={"ID":"03510882-13a9-4490-841f-3704c415e49d","Type":"ContainerDied","Data":"b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615"}
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.266800 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqm2x" event={"ID":"03510882-13a9-4490-841f-3704c415e49d","Type":"ContainerDied","Data":"b034c6e6b102ee56f3cec84fe0349eedeaeba93b0afe8ccced086d54528bb844"}
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.266827 4860 scope.go:117] "RemoveContainer" containerID="b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615"
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.267090 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqm2x"
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.295162 4860 scope.go:117] "RemoveContainer" containerID="b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245"
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.299384 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-utilities\") pod \"03510882-13a9-4490-841f-3704c415e49d\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") "
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.299671 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzb2f\" (UniqueName: \"kubernetes.io/projected/03510882-13a9-4490-841f-3704c415e49d-kube-api-access-gzb2f\") pod \"03510882-13a9-4490-841f-3704c415e49d\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") "
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.299836 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-catalog-content\") pod \"03510882-13a9-4490-841f-3704c415e49d\" (UID: \"03510882-13a9-4490-841f-3704c415e49d\") "
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.305382 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-utilities" (OuterVolumeSpecName: "utilities") pod "03510882-13a9-4490-841f-3704c415e49d" (UID: "03510882-13a9-4490-841f-3704c415e49d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.314115 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03510882-13a9-4490-841f-3704c415e49d-kube-api-access-gzb2f" (OuterVolumeSpecName: "kube-api-access-gzb2f") pod "03510882-13a9-4490-841f-3704c415e49d" (UID: "03510882-13a9-4490-841f-3704c415e49d"). InnerVolumeSpecName "kube-api-access-gzb2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.323030 4860 scope.go:117] "RemoveContainer" containerID="fc9976e2cca59b52453f60c425c65d3e1915f102de6fe787e7be662d457b122a"
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.367474 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03510882-13a9-4490-841f-3704c415e49d" (UID: "03510882-13a9-4490-841f-3704c415e49d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.397251 4860 scope.go:117] "RemoveContainer" containerID="b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615"
Jan 21 22:02:02 crc kubenswrapper[4860]: E0121 22:02:02.397847 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615\": container with ID starting with b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615 not found: ID does not exist" containerID="b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615"
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.397885 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615"} err="failed to get container status \"b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615\": rpc error: code = NotFound desc = could not find container \"b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615\": container with ID starting with b3a433852d58748dae2ce0311e2c0664bac4c1139d39144c1c6ba592d3fe1615 not found: ID does not exist"
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.397925 4860 scope.go:117] "RemoveContainer" containerID="b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245"
Jan 21 22:02:02 crc kubenswrapper[4860]: E0121 22:02:02.398662 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245\": container with ID starting with b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245 not found: ID does not exist" containerID="b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245"
Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.398738
4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245"} err="failed to get container status \"b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245\": rpc error: code = NotFound desc = could not find container \"b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245\": container with ID starting with b4dcb821a61e11b89645472453397590c4d060ffd74b07742fc54ef9bedae245 not found: ID does not exist" Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.398843 4860 scope.go:117] "RemoveContainer" containerID="fc9976e2cca59b52453f60c425c65d3e1915f102de6fe787e7be662d457b122a" Jan 21 22:02:02 crc kubenswrapper[4860]: E0121 22:02:02.399316 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc9976e2cca59b52453f60c425c65d3e1915f102de6fe787e7be662d457b122a\": container with ID starting with fc9976e2cca59b52453f60c425c65d3e1915f102de6fe787e7be662d457b122a not found: ID does not exist" containerID="fc9976e2cca59b52453f60c425c65d3e1915f102de6fe787e7be662d457b122a" Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.399372 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc9976e2cca59b52453f60c425c65d3e1915f102de6fe787e7be662d457b122a"} err="failed to get container status \"fc9976e2cca59b52453f60c425c65d3e1915f102de6fe787e7be662d457b122a\": rpc error: code = NotFound desc = could not find container \"fc9976e2cca59b52453f60c425c65d3e1915f102de6fe787e7be662d457b122a\": container with ID starting with fc9976e2cca59b52453f60c425c65d3e1915f102de6fe787e7be662d457b122a not found: ID does not exist" Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.402838 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.402870 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03510882-13a9-4490-841f-3704c415e49d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.402885 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzb2f\" (UniqueName: \"kubernetes.io/projected/03510882-13a9-4490-841f-3704c415e49d-kube-api-access-gzb2f\") on node \"crc\" DevicePath \"\"" Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.630199 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqm2x"] Jan 21 22:02:02 crc kubenswrapper[4860]: I0121 22:02:02.640481 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dqm2x"] Jan 21 22:02:04 crc kubenswrapper[4860]: I0121 22:02:04.581185 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 22:02:04 crc kubenswrapper[4860]: E0121 22:02:04.582240 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:02:04 crc kubenswrapper[4860]: I0121 22:02:04.594395 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03510882-13a9-4490-841f-3704c415e49d" path="/var/lib/kubelet/pods/03510882-13a9-4490-841f-3704c415e49d/volumes" Jan 21 22:02:17 crc kubenswrapper[4860]: I0121 22:02:17.579436 4860 
scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 22:02:17 crc kubenswrapper[4860]: E0121 22:02:17.580464 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:02:28 crc kubenswrapper[4860]: I0121 22:02:28.585428 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 22:02:28 crc kubenswrapper[4860]: E0121 22:02:28.605754 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:02:42 crc kubenswrapper[4860]: I0121 22:02:42.580084 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 22:02:43 crc kubenswrapper[4860]: I0121 22:02:43.803793 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"14f2633b9cc4a2148d9772cbbc1421c0f6cc99bdd72050eb4ca3378394bd4049"} Jan 21 22:05:02 crc kubenswrapper[4860]: I0121 22:05:02.104034 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:05:02 crc kubenswrapper[4860]: I0121 22:05:02.104671 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.025248 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9xf6q"] Jan 21 22:05:16 crc kubenswrapper[4860]: E0121 22:05:16.026367 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03510882-13a9-4490-841f-3704c415e49d" containerName="extract-content" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.026387 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="03510882-13a9-4490-841f-3704c415e49d" containerName="extract-content" Jan 21 22:05:16 crc kubenswrapper[4860]: E0121 22:05:16.026412 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03510882-13a9-4490-841f-3704c415e49d" containerName="extract-utilities" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.026420 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="03510882-13a9-4490-841f-3704c415e49d" containerName="extract-utilities" Jan 21 22:05:16 crc kubenswrapper[4860]: E0121 22:05:16.026458 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03510882-13a9-4490-841f-3704c415e49d" containerName="registry-server" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.026464 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="03510882-13a9-4490-841f-3704c415e49d" containerName="registry-server" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 
22:05:16.026662 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="03510882-13a9-4490-841f-3704c415e49d" containerName="registry-server" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.028019 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.049148 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9xf6q"] Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.058330 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-catalog-content\") pod \"redhat-operators-9xf6q\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.058419 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-utilities\") pod \"redhat-operators-9xf6q\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.058531 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvd4l\" (UniqueName: \"kubernetes.io/projected/54d83b70-328c-4555-ae51-35949ac9fe17-kube-api-access-xvd4l\") pod \"redhat-operators-9xf6q\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.159830 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-catalog-content\") pod \"redhat-operators-9xf6q\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.159898 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-utilities\") pod \"redhat-operators-9xf6q\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.159982 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvd4l\" (UniqueName: \"kubernetes.io/projected/54d83b70-328c-4555-ae51-35949ac9fe17-kube-api-access-xvd4l\") pod \"redhat-operators-9xf6q\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.160499 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-catalog-content\") pod \"redhat-operators-9xf6q\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.160813 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-utilities\") pod \"redhat-operators-9xf6q\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.186626 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvd4l\" (UniqueName: 
\"kubernetes.io/projected/54d83b70-328c-4555-ae51-35949ac9fe17-kube-api-access-xvd4l\") pod \"redhat-operators-9xf6q\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.354538 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:16 crc kubenswrapper[4860]: I0121 22:05:16.998131 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9xf6q"] Jan 21 22:05:17 crc kubenswrapper[4860]: I0121 22:05:17.886257 4860 generic.go:334] "Generic (PLEG): container finished" podID="54d83b70-328c-4555-ae51-35949ac9fe17" containerID="97d8efbf8e3b71cead69d1a092f7359d42fee6bad3cab11b5bcd6d9d31cf57c0" exitCode=0 Jan 21 22:05:17 crc kubenswrapper[4860]: I0121 22:05:17.886351 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xf6q" event={"ID":"54d83b70-328c-4555-ae51-35949ac9fe17","Type":"ContainerDied","Data":"97d8efbf8e3b71cead69d1a092f7359d42fee6bad3cab11b5bcd6d9d31cf57c0"} Jan 21 22:05:17 crc kubenswrapper[4860]: I0121 22:05:17.887386 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xf6q" event={"ID":"54d83b70-328c-4555-ae51-35949ac9fe17","Type":"ContainerStarted","Data":"624d417022782e6aa903df4a9c8c4fbb052a390731af2103e84a2682649e4b7b"} Jan 21 22:05:19 crc kubenswrapper[4860]: I0121 22:05:19.910437 4860 generic.go:334] "Generic (PLEG): container finished" podID="54d83b70-328c-4555-ae51-35949ac9fe17" containerID="0076c3d6942fa3d9ef1380bd79f533d80ff5caa52a4373743ebc9e8ca3f3e057" exitCode=0 Jan 21 22:05:19 crc kubenswrapper[4860]: I0121 22:05:19.910599 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xf6q" 
event={"ID":"54d83b70-328c-4555-ae51-35949ac9fe17","Type":"ContainerDied","Data":"0076c3d6942fa3d9ef1380bd79f533d80ff5caa52a4373743ebc9e8ca3f3e057"} Jan 21 22:05:20 crc kubenswrapper[4860]: I0121 22:05:20.926518 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xf6q" event={"ID":"54d83b70-328c-4555-ae51-35949ac9fe17","Type":"ContainerStarted","Data":"d9b089129966d1e677334f9894d5d550b19c1faf4b1f4112a27341b8667a7daf"} Jan 21 22:05:20 crc kubenswrapper[4860]: I0121 22:05:20.956818 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9xf6q" podStartSLOduration=3.513332255 podStartE2EDuration="5.956778999s" podCreationTimestamp="2026-01-21 22:05:15 +0000 UTC" firstStartedPulling="2026-01-21 22:05:17.888320119 +0000 UTC m=+3410.110498589" lastFinishedPulling="2026-01-21 22:05:20.331766863 +0000 UTC m=+3412.553945333" observedRunningTime="2026-01-21 22:05:20.950505245 +0000 UTC m=+3413.172683725" watchObservedRunningTime="2026-01-21 22:05:20.956778999 +0000 UTC m=+3413.178957469" Jan 21 22:05:26 crc kubenswrapper[4860]: I0121 22:05:26.355046 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:26 crc kubenswrapper[4860]: I0121 22:05:26.355886 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:27 crc kubenswrapper[4860]: I0121 22:05:27.406722 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9xf6q" podUID="54d83b70-328c-4555-ae51-35949ac9fe17" containerName="registry-server" probeResult="failure" output=< Jan 21 22:05:27 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 21 22:05:27 crc kubenswrapper[4860]: > Jan 21 22:05:32 crc kubenswrapper[4860]: I0121 22:05:32.106672 4860 patch_prober.go:28] interesting 
pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:05:32 crc kubenswrapper[4860]: I0121 22:05:32.107526 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:05:36 crc kubenswrapper[4860]: I0121 22:05:36.419733 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:36 crc kubenswrapper[4860]: I0121 22:05:36.469967 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:40 crc kubenswrapper[4860]: I0121 22:05:40.004713 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9xf6q"] Jan 21 22:05:40 crc kubenswrapper[4860]: I0121 22:05:40.005346 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9xf6q" podUID="54d83b70-328c-4555-ae51-35949ac9fe17" containerName="registry-server" containerID="cri-o://d9b089129966d1e677334f9894d5d550b19c1faf4b1f4112a27341b8667a7daf" gracePeriod=2 Jan 21 22:05:41 crc kubenswrapper[4860]: I0121 22:05:41.119677 4860 generic.go:334] "Generic (PLEG): container finished" podID="54d83b70-328c-4555-ae51-35949ac9fe17" containerID="d9b089129966d1e677334f9894d5d550b19c1faf4b1f4112a27341b8667a7daf" exitCode=0 Jan 21 22:05:41 crc kubenswrapper[4860]: I0121 22:05:41.119826 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xf6q" 
event={"ID":"54d83b70-328c-4555-ae51-35949ac9fe17","Type":"ContainerDied","Data":"d9b089129966d1e677334f9894d5d550b19c1faf4b1f4112a27341b8667a7daf"} Jan 21 22:05:41 crc kubenswrapper[4860]: I0121 22:05:41.882511 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.025236 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvd4l\" (UniqueName: \"kubernetes.io/projected/54d83b70-328c-4555-ae51-35949ac9fe17-kube-api-access-xvd4l\") pod \"54d83b70-328c-4555-ae51-35949ac9fe17\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.025376 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-utilities\") pod \"54d83b70-328c-4555-ae51-35949ac9fe17\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.025399 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-catalog-content\") pod \"54d83b70-328c-4555-ae51-35949ac9fe17\" (UID: \"54d83b70-328c-4555-ae51-35949ac9fe17\") " Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.027408 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-utilities" (OuterVolumeSpecName: "utilities") pod "54d83b70-328c-4555-ae51-35949ac9fe17" (UID: "54d83b70-328c-4555-ae51-35949ac9fe17"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.035946 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.037628 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d83b70-328c-4555-ae51-35949ac9fe17-kube-api-access-xvd4l" (OuterVolumeSpecName: "kube-api-access-xvd4l") pod "54d83b70-328c-4555-ae51-35949ac9fe17" (UID: "54d83b70-328c-4555-ae51-35949ac9fe17"). InnerVolumeSpecName "kube-api-access-xvd4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.134636 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xf6q" event={"ID":"54d83b70-328c-4555-ae51-35949ac9fe17","Type":"ContainerDied","Data":"624d417022782e6aa903df4a9c8c4fbb052a390731af2103e84a2682649e4b7b"} Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.134709 4860 scope.go:117] "RemoveContainer" containerID="d9b089129966d1e677334f9894d5d550b19c1faf4b1f4112a27341b8667a7daf" Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.134851 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9xf6q" Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.137445 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvd4l\" (UniqueName: \"kubernetes.io/projected/54d83b70-328c-4555-ae51-35949ac9fe17-kube-api-access-xvd4l\") on node \"crc\" DevicePath \"\"" Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.165270 4860 scope.go:117] "RemoveContainer" containerID="0076c3d6942fa3d9ef1380bd79f533d80ff5caa52a4373743ebc9e8ca3f3e057" Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.173017 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54d83b70-328c-4555-ae51-35949ac9fe17" (UID: "54d83b70-328c-4555-ae51-35949ac9fe17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.195422 4860 scope.go:117] "RemoveContainer" containerID="97d8efbf8e3b71cead69d1a092f7359d42fee6bad3cab11b5bcd6d9d31cf57c0" Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.239406 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d83b70-328c-4555-ae51-35949ac9fe17-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.471390 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9xf6q"] Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.478176 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9xf6q"] Jan 21 22:05:42 crc kubenswrapper[4860]: I0121 22:05:42.589592 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d83b70-328c-4555-ae51-35949ac9fe17" 
path="/var/lib/kubelet/pods/54d83b70-328c-4555-ae51-35949ac9fe17/volumes" Jan 21 22:06:02 crc kubenswrapper[4860]: I0121 22:06:02.104180 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:06:02 crc kubenswrapper[4860]: I0121 22:06:02.104795 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:06:02 crc kubenswrapper[4860]: I0121 22:06:02.104865 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 22:06:02 crc kubenswrapper[4860]: I0121 22:06:02.106014 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14f2633b9cc4a2148d9772cbbc1421c0f6cc99bdd72050eb4ca3378394bd4049"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 22:06:02 crc kubenswrapper[4860]: I0121 22:06:02.106089 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://14f2633b9cc4a2148d9772cbbc1421c0f6cc99bdd72050eb4ca3378394bd4049" gracePeriod=600 Jan 21 22:06:03 crc kubenswrapper[4860]: I0121 22:06:03.378569 4860 generic.go:334] "Generic (PLEG): container finished" 
podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="14f2633b9cc4a2148d9772cbbc1421c0f6cc99bdd72050eb4ca3378394bd4049" exitCode=0 Jan 21 22:06:03 crc kubenswrapper[4860]: I0121 22:06:03.378661 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"14f2633b9cc4a2148d9772cbbc1421c0f6cc99bdd72050eb4ca3378394bd4049"} Jan 21 22:06:03 crc kubenswrapper[4860]: I0121 22:06:03.379512 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066"} Jan 21 22:06:03 crc kubenswrapper[4860]: I0121 22:06:03.379551 4860 scope.go:117] "RemoveContainer" containerID="07c6cf60e155c7854bb4580fc734be6002035eb657c174b7b61b674cb073c768" Jan 21 22:07:24 crc kubenswrapper[4860]: I0121 22:07:24.822345 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpg2"] Jan 21 22:07:24 crc kubenswrapper[4860]: E0121 22:07:24.823698 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d83b70-328c-4555-ae51-35949ac9fe17" containerName="registry-server" Jan 21 22:07:24 crc kubenswrapper[4860]: I0121 22:07:24.823722 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d83b70-328c-4555-ae51-35949ac9fe17" containerName="registry-server" Jan 21 22:07:24 crc kubenswrapper[4860]: E0121 22:07:24.823753 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d83b70-328c-4555-ae51-35949ac9fe17" containerName="extract-utilities" Jan 21 22:07:24 crc kubenswrapper[4860]: I0121 22:07:24.823760 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d83b70-328c-4555-ae51-35949ac9fe17" containerName="extract-utilities" Jan 21 22:07:24 crc kubenswrapper[4860]: 
E0121 22:07:24.823785 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d83b70-328c-4555-ae51-35949ac9fe17" containerName="extract-content" Jan 21 22:07:24 crc kubenswrapper[4860]: I0121 22:07:24.823793 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d83b70-328c-4555-ae51-35949ac9fe17" containerName="extract-content" Jan 21 22:07:24 crc kubenswrapper[4860]: I0121 22:07:24.824009 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d83b70-328c-4555-ae51-35949ac9fe17" containerName="registry-server" Jan 21 22:07:24 crc kubenswrapper[4860]: I0121 22:07:24.825566 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:24 crc kubenswrapper[4860]: I0121 22:07:24.839522 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpg2"] Jan 21 22:07:24 crc kubenswrapper[4860]: I0121 22:07:24.995512 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-utilities\") pod \"redhat-marketplace-cnpg2\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:24 crc kubenswrapper[4860]: I0121 22:07:24.995942 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-catalog-content\") pod \"redhat-marketplace-cnpg2\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:24 crc kubenswrapper[4860]: I0121 22:07:24.995996 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p7zb\" (UniqueName: 
\"kubernetes.io/projected/ba7b6749-303a-4b6b-96a4-959a81d3af9b-kube-api-access-8p7zb\") pod \"redhat-marketplace-cnpg2\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:25 crc kubenswrapper[4860]: I0121 22:07:25.097488 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-utilities\") pod \"redhat-marketplace-cnpg2\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:25 crc kubenswrapper[4860]: I0121 22:07:25.097620 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-catalog-content\") pod \"redhat-marketplace-cnpg2\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:25 crc kubenswrapper[4860]: I0121 22:07:25.097639 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p7zb\" (UniqueName: \"kubernetes.io/projected/ba7b6749-303a-4b6b-96a4-959a81d3af9b-kube-api-access-8p7zb\") pod \"redhat-marketplace-cnpg2\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:25 crc kubenswrapper[4860]: I0121 22:07:25.098240 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-catalog-content\") pod \"redhat-marketplace-cnpg2\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:25 crc kubenswrapper[4860]: I0121 22:07:25.098342 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-utilities\") pod \"redhat-marketplace-cnpg2\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:25 crc kubenswrapper[4860]: I0121 22:07:25.125860 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p7zb\" (UniqueName: \"kubernetes.io/projected/ba7b6749-303a-4b6b-96a4-959a81d3af9b-kube-api-access-8p7zb\") pod \"redhat-marketplace-cnpg2\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:25 crc kubenswrapper[4860]: I0121 22:07:25.146200 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:25 crc kubenswrapper[4860]: I0121 22:07:25.685459 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpg2"] Jan 21 22:07:26 crc kubenswrapper[4860]: I0121 22:07:26.372818 4860 generic.go:334] "Generic (PLEG): container finished" podID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" containerID="2a540d4176243e03cfd258cc1a3a788878ecfa9d2099674231fa2b2f4d4c89d4" exitCode=0 Jan 21 22:07:26 crc kubenswrapper[4860]: I0121 22:07:26.372944 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpg2" event={"ID":"ba7b6749-303a-4b6b-96a4-959a81d3af9b","Type":"ContainerDied","Data":"2a540d4176243e03cfd258cc1a3a788878ecfa9d2099674231fa2b2f4d4c89d4"} Jan 21 22:07:26 crc kubenswrapper[4860]: I0121 22:07:26.375198 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpg2" event={"ID":"ba7b6749-303a-4b6b-96a4-959a81d3af9b","Type":"ContainerStarted","Data":"62921a7858e93b208fb4a82283416c1da8fc2ed8c35517ac8cf86befc4174d27"} Jan 21 22:07:26 crc kubenswrapper[4860]: I0121 22:07:26.377410 4860 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 21 22:07:27 crc kubenswrapper[4860]: I0121 22:07:27.394914 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpg2" event={"ID":"ba7b6749-303a-4b6b-96a4-959a81d3af9b","Type":"ContainerStarted","Data":"32a118f4fde4135eeb3a1ba36cc78ffdcd1bcfb7890dddda48218fe5c1c6beb3"} Jan 21 22:07:28 crc kubenswrapper[4860]: I0121 22:07:28.404393 4860 generic.go:334] "Generic (PLEG): container finished" podID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" containerID="32a118f4fde4135eeb3a1ba36cc78ffdcd1bcfb7890dddda48218fe5c1c6beb3" exitCode=0 Jan 21 22:07:28 crc kubenswrapper[4860]: I0121 22:07:28.404479 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpg2" event={"ID":"ba7b6749-303a-4b6b-96a4-959a81d3af9b","Type":"ContainerDied","Data":"32a118f4fde4135eeb3a1ba36cc78ffdcd1bcfb7890dddda48218fe5c1c6beb3"} Jan 21 22:07:29 crc kubenswrapper[4860]: I0121 22:07:29.422044 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpg2" event={"ID":"ba7b6749-303a-4b6b-96a4-959a81d3af9b","Type":"ContainerStarted","Data":"b21655da82ee0e14b12c0e1fff95ddd1bdf20b574997bbd8fdf42360051cd622"} Jan 21 22:07:29 crc kubenswrapper[4860]: I0121 22:07:29.470589 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cnpg2" podStartSLOduration=3.040675528 podStartE2EDuration="5.470564272s" podCreationTimestamp="2026-01-21 22:07:24 +0000 UTC" firstStartedPulling="2026-01-21 22:07:26.375550373 +0000 UTC m=+3538.597728843" lastFinishedPulling="2026-01-21 22:07:28.805438807 +0000 UTC m=+3541.027617587" observedRunningTime="2026-01-21 22:07:29.469104707 +0000 UTC m=+3541.691283167" watchObservedRunningTime="2026-01-21 22:07:29.470564272 +0000 UTC m=+3541.692742742" Jan 21 22:07:35 crc kubenswrapper[4860]: I0121 22:07:35.146840 4860 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:35 crc kubenswrapper[4860]: I0121 22:07:35.149159 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:35 crc kubenswrapper[4860]: I0121 22:07:35.208383 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:35 crc kubenswrapper[4860]: I0121 22:07:35.554229 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:39 crc kubenswrapper[4860]: I0121 22:07:39.407417 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpg2"] Jan 21 22:07:39 crc kubenswrapper[4860]: I0121 22:07:39.408258 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cnpg2" podUID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" containerName="registry-server" containerID="cri-o://b21655da82ee0e14b12c0e1fff95ddd1bdf20b574997bbd8fdf42360051cd622" gracePeriod=2 Jan 21 22:07:40 crc kubenswrapper[4860]: I0121 22:07:40.530713 4860 generic.go:334] "Generic (PLEG): container finished" podID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" containerID="b21655da82ee0e14b12c0e1fff95ddd1bdf20b574997bbd8fdf42360051cd622" exitCode=0 Jan 21 22:07:40 crc kubenswrapper[4860]: I0121 22:07:40.531053 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpg2" event={"ID":"ba7b6749-303a-4b6b-96a4-959a81d3af9b","Type":"ContainerDied","Data":"b21655da82ee0e14b12c0e1fff95ddd1bdf20b574997bbd8fdf42360051cd622"} Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.034009 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.142271 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-catalog-content\") pod \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.142347 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-utilities\") pod \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.142455 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p7zb\" (UniqueName: \"kubernetes.io/projected/ba7b6749-303a-4b6b-96a4-959a81d3af9b-kube-api-access-8p7zb\") pod \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\" (UID: \"ba7b6749-303a-4b6b-96a4-959a81d3af9b\") " Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.148509 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-utilities" (OuterVolumeSpecName: "utilities") pod "ba7b6749-303a-4b6b-96a4-959a81d3af9b" (UID: "ba7b6749-303a-4b6b-96a4-959a81d3af9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.173825 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba7b6749-303a-4b6b-96a4-959a81d3af9b-kube-api-access-8p7zb" (OuterVolumeSpecName: "kube-api-access-8p7zb") pod "ba7b6749-303a-4b6b-96a4-959a81d3af9b" (UID: "ba7b6749-303a-4b6b-96a4-959a81d3af9b"). InnerVolumeSpecName "kube-api-access-8p7zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.186278 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba7b6749-303a-4b6b-96a4-959a81d3af9b" (UID: "ba7b6749-303a-4b6b-96a4-959a81d3af9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.244651 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.244694 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7b6749-303a-4b6b-96a4-959a81d3af9b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.244710 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p7zb\" (UniqueName: \"kubernetes.io/projected/ba7b6749-303a-4b6b-96a4-959a81d3af9b-kube-api-access-8p7zb\") on node \"crc\" DevicePath \"\"" Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.547350 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnpg2" event={"ID":"ba7b6749-303a-4b6b-96a4-959a81d3af9b","Type":"ContainerDied","Data":"62921a7858e93b208fb4a82283416c1da8fc2ed8c35517ac8cf86befc4174d27"} Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.547978 4860 scope.go:117] "RemoveContainer" containerID="b21655da82ee0e14b12c0e1fff95ddd1bdf20b574997bbd8fdf42360051cd622" Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.547429 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnpg2" Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.586857 4860 scope.go:117] "RemoveContainer" containerID="32a118f4fde4135eeb3a1ba36cc78ffdcd1bcfb7890dddda48218fe5c1c6beb3" Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.602759 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpg2"] Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.610205 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnpg2"] Jan 21 22:07:41 crc kubenswrapper[4860]: I0121 22:07:41.613250 4860 scope.go:117] "RemoveContainer" containerID="2a540d4176243e03cfd258cc1a3a788878ecfa9d2099674231fa2b2f4d4c89d4" Jan 21 22:07:42 crc kubenswrapper[4860]: I0121 22:07:42.591030 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" path="/var/lib/kubelet/pods/ba7b6749-303a-4b6b-96a4-959a81d3af9b/volumes" Jan 21 22:08:02 crc kubenswrapper[4860]: I0121 22:08:02.103574 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:08:02 crc kubenswrapper[4860]: I0121 22:08:02.104552 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:08:32 crc kubenswrapper[4860]: I0121 22:08:32.103658 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:08:32 crc kubenswrapper[4860]: I0121 22:08:32.104333 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:09:02 crc kubenswrapper[4860]: I0121 22:09:02.104026 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:09:02 crc kubenswrapper[4860]: I0121 22:09:02.104992 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:09:02 crc kubenswrapper[4860]: I0121 22:09:02.105129 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 22:09:02 crc kubenswrapper[4860]: I0121 22:09:02.106513 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 22:09:02 crc kubenswrapper[4860]: I0121 22:09:02.106646 4860 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" gracePeriod=600 Jan 21 22:09:02 crc kubenswrapper[4860]: E0121 22:09:02.739916 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:09:03 crc kubenswrapper[4860]: I0121 22:09:03.355073 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" exitCode=0 Jan 21 22:09:03 crc kubenswrapper[4860]: I0121 22:09:03.355166 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066"} Jan 21 22:09:03 crc kubenswrapper[4860]: I0121 22:09:03.356170 4860 scope.go:117] "RemoveContainer" containerID="14f2633b9cc4a2148d9772cbbc1421c0f6cc99bdd72050eb4ca3378394bd4049" Jan 21 22:09:03 crc kubenswrapper[4860]: I0121 22:09:03.357134 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:09:03 crc kubenswrapper[4860]: E0121 22:09:03.357512 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:09:15 crc kubenswrapper[4860]: I0121 22:09:15.579221 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:09:15 crc kubenswrapper[4860]: E0121 22:09:15.581071 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:09:26 crc kubenswrapper[4860]: I0121 22:09:26.580087 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:09:26 crc kubenswrapper[4860]: E0121 22:09:26.581847 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:09:38 crc kubenswrapper[4860]: I0121 22:09:38.591191 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:09:38 crc kubenswrapper[4860]: E0121 22:09:38.592498 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:09:53 crc kubenswrapper[4860]: I0121 22:09:53.579648 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:09:53 crc kubenswrapper[4860]: E0121 22:09:53.580464 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:10:06 crc kubenswrapper[4860]: I0121 22:10:06.579693 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:10:06 crc kubenswrapper[4860]: E0121 22:10:06.580796 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:10:18 crc kubenswrapper[4860]: I0121 22:10:18.589867 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:10:18 crc kubenswrapper[4860]: E0121 22:10:18.591534 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:10:30 crc kubenswrapper[4860]: I0121 22:10:30.578693 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:10:30 crc kubenswrapper[4860]: E0121 22:10:30.579839 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:10:42 crc kubenswrapper[4860]: I0121 22:10:42.581029 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:10:42 crc kubenswrapper[4860]: E0121 22:10:42.582440 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:10:57 crc kubenswrapper[4860]: I0121 22:10:57.580957 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:10:57 crc kubenswrapper[4860]: E0121 22:10:57.582355 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:11:10 crc kubenswrapper[4860]: I0121 22:11:10.579838 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:11:10 crc kubenswrapper[4860]: E0121 22:11:10.581571 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:11:24 crc kubenswrapper[4860]: I0121 22:11:24.581015 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:11:24 crc kubenswrapper[4860]: E0121 22:11:24.581942 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:11:35 crc kubenswrapper[4860]: I0121 22:11:35.579020 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:11:35 crc kubenswrapper[4860]: E0121 22:11:35.580101 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:11:48 crc kubenswrapper[4860]: I0121 22:11:48.582996 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:11:48 crc kubenswrapper[4860]: E0121 22:11:48.585338 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:12:03 crc kubenswrapper[4860]: I0121 22:12:03.580848 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:12:03 crc kubenswrapper[4860]: E0121 22:12:03.581759 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:12:15 crc kubenswrapper[4860]: I0121 22:12:15.579877 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:12:15 crc kubenswrapper[4860]: E0121 22:12:15.581108 4860 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:12:30 crc kubenswrapper[4860]: I0121 22:12:30.579521 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:12:30 crc kubenswrapper[4860]: E0121 22:12:30.580571 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:12:43 crc kubenswrapper[4860]: I0121 22:12:43.579260 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:12:43 crc kubenswrapper[4860]: E0121 22:12:43.580224 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:12:57 crc kubenswrapper[4860]: I0121 22:12:57.578691 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:12:57 crc kubenswrapper[4860]: E0121 22:12:57.579760 4860 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:13:12 crc kubenswrapper[4860]: I0121 22:13:12.585313 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:13:12 crc kubenswrapper[4860]: E0121 22:13:12.586289 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:13:23 crc kubenswrapper[4860]: I0121 22:13:23.579170 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:13:23 crc kubenswrapper[4860]: E0121 22:13:23.580725 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:13:38 crc kubenswrapper[4860]: I0121 22:13:38.584581 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:13:38 crc kubenswrapper[4860]: E0121 22:13:38.585583 4860 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:13:51 crc kubenswrapper[4860]: I0121 22:13:51.579262 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:13:51 crc kubenswrapper[4860]: E0121 22:13:51.580087 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:14:05 crc kubenswrapper[4860]: I0121 22:14:05.579654 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:14:06 crc kubenswrapper[4860]: I0121 22:14:06.657758 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"14d1af4226bf32fc74e96cd396a63c5e4ae53778ade27d795fe68f505e82570e"} Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.207653 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr"] Jan 21 22:15:00 crc kubenswrapper[4860]: E0121 22:15:00.209301 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" 
containerName="extract-utilities" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.209328 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" containerName="extract-utilities" Jan 21 22:15:00 crc kubenswrapper[4860]: E0121 22:15:00.209378 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" containerName="extract-content" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.209385 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" containerName="extract-content" Jan 21 22:15:00 crc kubenswrapper[4860]: E0121 22:15:00.209401 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" containerName="registry-server" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.209411 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" containerName="registry-server" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.209668 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7b6749-303a-4b6b-96a4-959a81d3af9b" containerName="registry-server" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.210874 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.214852 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.215659 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db2f0b22-8235-43f8-9264-56794e617aa8-config-volume\") pod \"collect-profiles-29483895-m4jlr\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.216064 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db2f0b22-8235-43f8-9264-56794e617aa8-secret-volume\") pod \"collect-profiles-29483895-m4jlr\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.216206 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtv8p\" (UniqueName: \"kubernetes.io/projected/db2f0b22-8235-43f8-9264-56794e617aa8-kube-api-access-vtv8p\") pod \"collect-profiles-29483895-m4jlr\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.217072 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.234017 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr"] Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.318670 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db2f0b22-8235-43f8-9264-56794e617aa8-config-volume\") pod \"collect-profiles-29483895-m4jlr\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.319039 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db2f0b22-8235-43f8-9264-56794e617aa8-secret-volume\") pod \"collect-profiles-29483895-m4jlr\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.319262 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtv8p\" (UniqueName: \"kubernetes.io/projected/db2f0b22-8235-43f8-9264-56794e617aa8-kube-api-access-vtv8p\") pod \"collect-profiles-29483895-m4jlr\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.320507 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db2f0b22-8235-43f8-9264-56794e617aa8-config-volume\") pod \"collect-profiles-29483895-m4jlr\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.329548 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/db2f0b22-8235-43f8-9264-56794e617aa8-secret-volume\") pod \"collect-profiles-29483895-m4jlr\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.343027 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtv8p\" (UniqueName: \"kubernetes.io/projected/db2f0b22-8235-43f8-9264-56794e617aa8-kube-api-access-vtv8p\") pod \"collect-profiles-29483895-m4jlr\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:00 crc kubenswrapper[4860]: I0121 22:15:00.534134 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:01 crc kubenswrapper[4860]: I0121 22:15:01.472762 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr"] Jan 21 22:15:02 crc kubenswrapper[4860]: I0121 22:15:02.278270 4860 generic.go:334] "Generic (PLEG): container finished" podID="db2f0b22-8235-43f8-9264-56794e617aa8" containerID="fa848e10090c7596c54b5d218aa2ed87865ddc5c6f9053b479883d3b8ac44c92" exitCode=0 Jan 21 22:15:02 crc kubenswrapper[4860]: I0121 22:15:02.278396 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" event={"ID":"db2f0b22-8235-43f8-9264-56794e617aa8","Type":"ContainerDied","Data":"fa848e10090c7596c54b5d218aa2ed87865ddc5c6f9053b479883d3b8ac44c92"} Jan 21 22:15:02 crc kubenswrapper[4860]: I0121 22:15:02.278754 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" 
event={"ID":"db2f0b22-8235-43f8-9264-56794e617aa8","Type":"ContainerStarted","Data":"93ce140f9c2448fae74eaa03044914fc59f66771d514656e416b935736c20764"} Jan 21 22:15:03 crc kubenswrapper[4860]: I0121 22:15:03.861393 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:03 crc kubenswrapper[4860]: I0121 22:15:03.929372 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtv8p\" (UniqueName: \"kubernetes.io/projected/db2f0b22-8235-43f8-9264-56794e617aa8-kube-api-access-vtv8p\") pod \"db2f0b22-8235-43f8-9264-56794e617aa8\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " Jan 21 22:15:03 crc kubenswrapper[4860]: I0121 22:15:03.929488 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db2f0b22-8235-43f8-9264-56794e617aa8-secret-volume\") pod \"db2f0b22-8235-43f8-9264-56794e617aa8\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " Jan 21 22:15:03 crc kubenswrapper[4860]: I0121 22:15:03.929528 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db2f0b22-8235-43f8-9264-56794e617aa8-config-volume\") pod \"db2f0b22-8235-43f8-9264-56794e617aa8\" (UID: \"db2f0b22-8235-43f8-9264-56794e617aa8\") " Jan 21 22:15:03 crc kubenswrapper[4860]: I0121 22:15:03.930813 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2f0b22-8235-43f8-9264-56794e617aa8-config-volume" (OuterVolumeSpecName: "config-volume") pod "db2f0b22-8235-43f8-9264-56794e617aa8" (UID: "db2f0b22-8235-43f8-9264-56794e617aa8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 22:15:03 crc kubenswrapper[4860]: I0121 22:15:03.937361 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2f0b22-8235-43f8-9264-56794e617aa8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "db2f0b22-8235-43f8-9264-56794e617aa8" (UID: "db2f0b22-8235-43f8-9264-56794e617aa8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 22:15:03 crc kubenswrapper[4860]: I0121 22:15:03.951549 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2f0b22-8235-43f8-9264-56794e617aa8-kube-api-access-vtv8p" (OuterVolumeSpecName: "kube-api-access-vtv8p") pod "db2f0b22-8235-43f8-9264-56794e617aa8" (UID: "db2f0b22-8235-43f8-9264-56794e617aa8"). InnerVolumeSpecName "kube-api-access-vtv8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 22:15:04 crc kubenswrapper[4860]: I0121 22:15:04.031392 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtv8p\" (UniqueName: \"kubernetes.io/projected/db2f0b22-8235-43f8-9264-56794e617aa8-kube-api-access-vtv8p\") on node \"crc\" DevicePath \"\"" Jan 21 22:15:04 crc kubenswrapper[4860]: I0121 22:15:04.031770 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db2f0b22-8235-43f8-9264-56794e617aa8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 22:15:04 crc kubenswrapper[4860]: I0121 22:15:04.031833 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db2f0b22-8235-43f8-9264-56794e617aa8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 22:15:04 crc kubenswrapper[4860]: I0121 22:15:04.309911 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" 
event={"ID":"db2f0b22-8235-43f8-9264-56794e617aa8","Type":"ContainerDied","Data":"93ce140f9c2448fae74eaa03044914fc59f66771d514656e416b935736c20764"} Jan 21 22:15:04 crc kubenswrapper[4860]: I0121 22:15:04.309978 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93ce140f9c2448fae74eaa03044914fc59f66771d514656e416b935736c20764" Jan 21 22:15:04 crc kubenswrapper[4860]: I0121 22:15:04.310013 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483895-m4jlr" Jan 21 22:15:04 crc kubenswrapper[4860]: I0121 22:15:04.965230 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk"] Jan 21 22:15:04 crc kubenswrapper[4860]: I0121 22:15:04.973332 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483850-bv5hk"] Jan 21 22:15:06 crc kubenswrapper[4860]: I0121 22:15:06.592222 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f" path="/var/lib/kubelet/pods/5e2d204d-f5ea-48b7-a0e2-6e6c6c783b5f/volumes" Jan 21 22:15:43 crc kubenswrapper[4860]: I0121 22:15:43.812211 4860 scope.go:117] "RemoveContainer" containerID="d7f302a045eb40e1013225bd03e1fbeab7054b9e52dca4de95ed4d387bcc74bc" Jan 21 22:16:32 crc kubenswrapper[4860]: I0121 22:16:32.103871 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:16:32 crc kubenswrapper[4860]: I0121 22:16:32.104817 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.430752 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m787q"] Jan 21 22:16:36 crc kubenswrapper[4860]: E0121 22:16:36.432348 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2f0b22-8235-43f8-9264-56794e617aa8" containerName="collect-profiles" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.432373 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2f0b22-8235-43f8-9264-56794e617aa8" containerName="collect-profiles" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.432664 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="db2f0b22-8235-43f8-9264-56794e617aa8" containerName="collect-profiles" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.434635 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.458129 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m787q"] Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.625913 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48hnp\" (UniqueName: \"kubernetes.io/projected/4b236373-24b4-4b1d-baaf-b111ef3b2281-kube-api-access-48hnp\") pod \"redhat-operators-m787q\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.626042 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-catalog-content\") pod \"redhat-operators-m787q\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.626236 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-utilities\") pod \"redhat-operators-m787q\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.728198 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-utilities\") pod \"redhat-operators-m787q\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.728329 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-48hnp\" (UniqueName: \"kubernetes.io/projected/4b236373-24b4-4b1d-baaf-b111ef3b2281-kube-api-access-48hnp\") pod \"redhat-operators-m787q\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.728372 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-catalog-content\") pod \"redhat-operators-m787q\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.729276 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-utilities\") pod \"redhat-operators-m787q\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.729348 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-catalog-content\") pod \"redhat-operators-m787q\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:36 crc kubenswrapper[4860]: I0121 22:16:36.758362 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48hnp\" (UniqueName: \"kubernetes.io/projected/4b236373-24b4-4b1d-baaf-b111ef3b2281-kube-api-access-48hnp\") pod \"redhat-operators-m787q\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:37 crc kubenswrapper[4860]: I0121 22:16:37.057153 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:50 crc kubenswrapper[4860]: I0121 22:16:50.929322 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m787q"] Jan 21 22:16:51 crc kubenswrapper[4860]: I0121 22:16:51.849790 4860 generic.go:334] "Generic (PLEG): container finished" podID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerID="67b4645515aaadb3441159f72382cdb684a7f88ce86203d0dd090098ed721e22" exitCode=0 Jan 21 22:16:51 crc kubenswrapper[4860]: I0121 22:16:51.849920 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m787q" event={"ID":"4b236373-24b4-4b1d-baaf-b111ef3b2281","Type":"ContainerDied","Data":"67b4645515aaadb3441159f72382cdb684a7f88ce86203d0dd090098ed721e22"} Jan 21 22:16:51 crc kubenswrapper[4860]: I0121 22:16:51.851030 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m787q" event={"ID":"4b236373-24b4-4b1d-baaf-b111ef3b2281","Type":"ContainerStarted","Data":"8fc7d3b349ba3776bd5e9b1e4d7449ab08cee88cc86c4cc6b04987df23afca36"} Jan 21 22:16:51 crc kubenswrapper[4860]: I0121 22:16:51.853889 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 22:16:52 crc kubenswrapper[4860]: I0121 22:16:52.864011 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m787q" event={"ID":"4b236373-24b4-4b1d-baaf-b111ef3b2281","Type":"ContainerStarted","Data":"24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226"} Jan 21 22:16:53 crc kubenswrapper[4860]: I0121 22:16:53.876295 4860 generic.go:334] "Generic (PLEG): container finished" podID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerID="24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226" exitCode=0 Jan 21 22:16:53 crc kubenswrapper[4860]: I0121 22:16:53.876362 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-m787q" event={"ID":"4b236373-24b4-4b1d-baaf-b111ef3b2281","Type":"ContainerDied","Data":"24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226"} Jan 21 22:16:55 crc kubenswrapper[4860]: I0121 22:16:55.910562 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m787q" event={"ID":"4b236373-24b4-4b1d-baaf-b111ef3b2281","Type":"ContainerStarted","Data":"2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff"} Jan 21 22:16:55 crc kubenswrapper[4860]: I0121 22:16:55.942199 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m787q" podStartSLOduration=17.501101640999998 podStartE2EDuration="19.942131547s" podCreationTimestamp="2026-01-21 22:16:36 +0000 UTC" firstStartedPulling="2026-01-21 22:16:51.853122036 +0000 UTC m=+4104.075300546" lastFinishedPulling="2026-01-21 22:16:54.294151982 +0000 UTC m=+4106.516330452" observedRunningTime="2026-01-21 22:16:55.940857307 +0000 UTC m=+4108.163035807" watchObservedRunningTime="2026-01-21 22:16:55.942131547 +0000 UTC m=+4108.164310027" Jan 21 22:16:57 crc kubenswrapper[4860]: I0121 22:16:57.057848 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:57 crc kubenswrapper[4860]: I0121 22:16:57.058103 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:16:58 crc kubenswrapper[4860]: I0121 22:16:58.137455 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m787q" podUID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerName="registry-server" probeResult="failure" output=< Jan 21 22:16:58 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 21 22:16:58 crc kubenswrapper[4860]: > Jan 21 22:17:02 crc kubenswrapper[4860]: I0121 
22:17:02.104215 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:17:02 crc kubenswrapper[4860]: I0121 22:17:02.104874 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:17:07 crc kubenswrapper[4860]: I0121 22:17:07.141409 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:17:07 crc kubenswrapper[4860]: I0121 22:17:07.226175 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.024432 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m787q"] Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.025678 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m787q" podUID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerName="registry-server" containerID="cri-o://2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff" gracePeriod=2 Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.563154 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.664608 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48hnp\" (UniqueName: \"kubernetes.io/projected/4b236373-24b4-4b1d-baaf-b111ef3b2281-kube-api-access-48hnp\") pod \"4b236373-24b4-4b1d-baaf-b111ef3b2281\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.664710 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-catalog-content\") pod \"4b236373-24b4-4b1d-baaf-b111ef3b2281\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.664805 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-utilities\") pod \"4b236373-24b4-4b1d-baaf-b111ef3b2281\" (UID: \"4b236373-24b4-4b1d-baaf-b111ef3b2281\") " Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.666505 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-utilities" (OuterVolumeSpecName: "utilities") pod "4b236373-24b4-4b1d-baaf-b111ef3b2281" (UID: "4b236373-24b4-4b1d-baaf-b111ef3b2281"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.684272 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b236373-24b4-4b1d-baaf-b111ef3b2281-kube-api-access-48hnp" (OuterVolumeSpecName: "kube-api-access-48hnp") pod "4b236373-24b4-4b1d-baaf-b111ef3b2281" (UID: "4b236373-24b4-4b1d-baaf-b111ef3b2281"). InnerVolumeSpecName "kube-api-access-48hnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.767753 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48hnp\" (UniqueName: \"kubernetes.io/projected/4b236373-24b4-4b1d-baaf-b111ef3b2281-kube-api-access-48hnp\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.767822 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.817765 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b236373-24b4-4b1d-baaf-b111ef3b2281" (UID: "4b236373-24b4-4b1d-baaf-b111ef3b2281"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:17:10 crc kubenswrapper[4860]: I0121 22:17:10.870333 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b236373-24b4-4b1d-baaf-b111ef3b2281-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.074307 4860 generic.go:334] "Generic (PLEG): container finished" podID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerID="2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff" exitCode=0 Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.074386 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m787q" event={"ID":"4b236373-24b4-4b1d-baaf-b111ef3b2281","Type":"ContainerDied","Data":"2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff"} Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.074435 4860 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-m787q" event={"ID":"4b236373-24b4-4b1d-baaf-b111ef3b2281","Type":"ContainerDied","Data":"8fc7d3b349ba3776bd5e9b1e4d7449ab08cee88cc86c4cc6b04987df23afca36"} Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.074471 4860 scope.go:117] "RemoveContainer" containerID="2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff" Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.074755 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m787q" Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.103658 4860 scope.go:117] "RemoveContainer" containerID="24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226" Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.148469 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m787q"] Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.157959 4860 scope.go:117] "RemoveContainer" containerID="67b4645515aaadb3441159f72382cdb684a7f88ce86203d0dd090098ed721e22" Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.161264 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m787q"] Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.183916 4860 scope.go:117] "RemoveContainer" containerID="2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff" Jan 21 22:17:11 crc kubenswrapper[4860]: E0121 22:17:11.184691 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff\": container with ID starting with 2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff not found: ID does not exist" containerID="2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff" Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.184760 4860 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff"} err="failed to get container status \"2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff\": rpc error: code = NotFound desc = could not find container \"2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff\": container with ID starting with 2c87418f74cd1579a1b56d229595e7b0433540ba0a617a994112332f9d691dff not found: ID does not exist" Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.184803 4860 scope.go:117] "RemoveContainer" containerID="24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226" Jan 21 22:17:11 crc kubenswrapper[4860]: E0121 22:17:11.185405 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226\": container with ID starting with 24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226 not found: ID does not exist" containerID="24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226" Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.185480 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226"} err="failed to get container status \"24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226\": rpc error: code = NotFound desc = could not find container \"24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226\": container with ID starting with 24b3b041e084132529d70d65d9863bee7b749cd5a4ede23e17e17f594f540226 not found: ID does not exist" Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.185532 4860 scope.go:117] "RemoveContainer" containerID="67b4645515aaadb3441159f72382cdb684a7f88ce86203d0dd090098ed721e22" Jan 21 22:17:11 crc kubenswrapper[4860]: E0121 
22:17:11.186234 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67b4645515aaadb3441159f72382cdb684a7f88ce86203d0dd090098ed721e22\": container with ID starting with 67b4645515aaadb3441159f72382cdb684a7f88ce86203d0dd090098ed721e22 not found: ID does not exist" containerID="67b4645515aaadb3441159f72382cdb684a7f88ce86203d0dd090098ed721e22" Jan 21 22:17:11 crc kubenswrapper[4860]: I0121 22:17:11.186287 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67b4645515aaadb3441159f72382cdb684a7f88ce86203d0dd090098ed721e22"} err="failed to get container status \"67b4645515aaadb3441159f72382cdb684a7f88ce86203d0dd090098ed721e22\": rpc error: code = NotFound desc = could not find container \"67b4645515aaadb3441159f72382cdb684a7f88ce86203d0dd090098ed721e22\": container with ID starting with 67b4645515aaadb3441159f72382cdb684a7f88ce86203d0dd090098ed721e22 not found: ID does not exist" Jan 21 22:17:12 crc kubenswrapper[4860]: I0121 22:17:12.594208 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b236373-24b4-4b1d-baaf-b111ef3b2281" path="/var/lib/kubelet/pods/4b236373-24b4-4b1d-baaf-b111ef3b2281/volumes" Jan 21 22:17:19 crc kubenswrapper[4860]: I0121 22:17:19.823867 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9zrjt"] Jan 21 22:17:19 crc kubenswrapper[4860]: E0121 22:17:19.825839 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerName="registry-server" Jan 21 22:17:19 crc kubenswrapper[4860]: I0121 22:17:19.825858 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerName="registry-server" Jan 21 22:17:19 crc kubenswrapper[4860]: E0121 22:17:19.825879 4860 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerName="extract-content" Jan 21 22:17:19 crc kubenswrapper[4860]: I0121 22:17:19.825885 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerName="extract-content" Jan 21 22:17:19 crc kubenswrapper[4860]: E0121 22:17:19.825898 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerName="extract-utilities" Jan 21 22:17:19 crc kubenswrapper[4860]: I0121 22:17:19.825905 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerName="extract-utilities" Jan 21 22:17:19 crc kubenswrapper[4860]: I0121 22:17:19.826113 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b236373-24b4-4b1d-baaf-b111ef3b2281" containerName="registry-server" Jan 21 22:17:19 crc kubenswrapper[4860]: I0121 22:17:19.827467 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:19 crc kubenswrapper[4860]: I0121 22:17:19.845639 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9zrjt"] Jan 21 22:17:19 crc kubenswrapper[4860]: I0121 22:17:19.905387 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-utilities\") pod \"community-operators-9zrjt\" (UID: \"9e099029-38de-4927-8846-211306e67ca3\") " pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:19 crc kubenswrapper[4860]: I0121 22:17:19.905648 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-catalog-content\") pod \"community-operators-9zrjt\" (UID: 
\"9e099029-38de-4927-8846-211306e67ca3\") " pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:19 crc kubenswrapper[4860]: I0121 22:17:19.905847 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfpb\" (UniqueName: \"kubernetes.io/projected/9e099029-38de-4927-8846-211306e67ca3-kube-api-access-kdfpb\") pod \"community-operators-9zrjt\" (UID: \"9e099029-38de-4927-8846-211306e67ca3\") " pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:20 crc kubenswrapper[4860]: I0121 22:17:20.008194 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-utilities\") pod \"community-operators-9zrjt\" (UID: \"9e099029-38de-4927-8846-211306e67ca3\") " pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:20 crc kubenswrapper[4860]: I0121 22:17:20.008293 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-catalog-content\") pod \"community-operators-9zrjt\" (UID: \"9e099029-38de-4927-8846-211306e67ca3\") " pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:20 crc kubenswrapper[4860]: I0121 22:17:20.008370 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdfpb\" (UniqueName: \"kubernetes.io/projected/9e099029-38de-4927-8846-211306e67ca3-kube-api-access-kdfpb\") pod \"community-operators-9zrjt\" (UID: \"9e099029-38de-4927-8846-211306e67ca3\") " pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:20 crc kubenswrapper[4860]: I0121 22:17:20.008866 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-utilities\") pod \"community-operators-9zrjt\" (UID: 
\"9e099029-38de-4927-8846-211306e67ca3\") " pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:20 crc kubenswrapper[4860]: I0121 22:17:20.008973 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-catalog-content\") pod \"community-operators-9zrjt\" (UID: \"9e099029-38de-4927-8846-211306e67ca3\") " pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:20 crc kubenswrapper[4860]: I0121 22:17:20.041980 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdfpb\" (UniqueName: \"kubernetes.io/projected/9e099029-38de-4927-8846-211306e67ca3-kube-api-access-kdfpb\") pod \"community-operators-9zrjt\" (UID: \"9e099029-38de-4927-8846-211306e67ca3\") " pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:20 crc kubenswrapper[4860]: I0121 22:17:20.155687 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:20 crc kubenswrapper[4860]: I0121 22:17:20.693123 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9zrjt"] Jan 21 22:17:21 crc kubenswrapper[4860]: I0121 22:17:21.179013 4860 generic.go:334] "Generic (PLEG): container finished" podID="9e099029-38de-4927-8846-211306e67ca3" containerID="6c8751492c55735c1f17ab3c46a428eec58eeb8f9239cdd5d16986d285481eb8" exitCode=0 Jan 21 22:17:21 crc kubenswrapper[4860]: I0121 22:17:21.179432 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zrjt" event={"ID":"9e099029-38de-4927-8846-211306e67ca3","Type":"ContainerDied","Data":"6c8751492c55735c1f17ab3c46a428eec58eeb8f9239cdd5d16986d285481eb8"} Jan 21 22:17:21 crc kubenswrapper[4860]: I0121 22:17:21.179464 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zrjt" event={"ID":"9e099029-38de-4927-8846-211306e67ca3","Type":"ContainerStarted","Data":"22c8e46aa4e63e7bd0cdef66a016bc6397d780d2594436f729c3ff74e415401a"} Jan 21 22:17:23 crc kubenswrapper[4860]: I0121 22:17:23.203878 4860 generic.go:334] "Generic (PLEG): container finished" podID="9e099029-38de-4927-8846-211306e67ca3" containerID="799c368d74819aeb0f78f0716e5e34da463fa818fcef561bd9b25ff10473c859" exitCode=0 Jan 21 22:17:23 crc kubenswrapper[4860]: I0121 22:17:23.203964 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zrjt" event={"ID":"9e099029-38de-4927-8846-211306e67ca3","Type":"ContainerDied","Data":"799c368d74819aeb0f78f0716e5e34da463fa818fcef561bd9b25ff10473c859"} Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.217016 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zrjt" 
event={"ID":"9e099029-38de-4927-8846-211306e67ca3","Type":"ContainerStarted","Data":"d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c"} Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.243300 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9zrjt" podStartSLOduration=2.8084862839999998 podStartE2EDuration="5.243278502s" podCreationTimestamp="2026-01-21 22:17:19 +0000 UTC" firstStartedPulling="2026-01-21 22:17:21.181123637 +0000 UTC m=+4133.403302107" lastFinishedPulling="2026-01-21 22:17:23.615915855 +0000 UTC m=+4135.838094325" observedRunningTime="2026-01-21 22:17:24.241362563 +0000 UTC m=+4136.463541023" watchObservedRunningTime="2026-01-21 22:17:24.243278502 +0000 UTC m=+4136.465456972" Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.615805 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j8jqx"] Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.617756 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.619835 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-catalog-content\") pod \"certified-operators-j8jqx\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.619917 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-utilities\") pod \"certified-operators-j8jqx\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.620005 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crk68\" (UniqueName: \"kubernetes.io/projected/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-kube-api-access-crk68\") pod \"certified-operators-j8jqx\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.627511 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8jqx"] Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.721806 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-catalog-content\") pod \"certified-operators-j8jqx\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.722116 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-utilities\") pod \"certified-operators-j8jqx\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.722420 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crk68\" (UniqueName: \"kubernetes.io/projected/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-kube-api-access-crk68\") pod \"certified-operators-j8jqx\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.722775 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-utilities\") pod \"certified-operators-j8jqx\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.723251 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-catalog-content\") pod \"certified-operators-j8jqx\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.752902 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crk68\" (UniqueName: \"kubernetes.io/projected/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-kube-api-access-crk68\") pod \"certified-operators-j8jqx\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:24 crc kubenswrapper[4860]: I0121 22:17:24.944763 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:25 crc kubenswrapper[4860]: I0121 22:17:25.456688 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8jqx"] Jan 21 22:17:26 crc kubenswrapper[4860]: I0121 22:17:26.237185 4860 generic.go:334] "Generic (PLEG): container finished" podID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" containerID="e9051d1b1174dfad972fa7d130f1fc3c20b0f6c8af9a1d59bdcbca9f2a69146e" exitCode=0 Jan 21 22:17:26 crc kubenswrapper[4860]: I0121 22:17:26.237262 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8jqx" event={"ID":"4ca6b0c5-b760-4151-9b2e-447d8c2b631d","Type":"ContainerDied","Data":"e9051d1b1174dfad972fa7d130f1fc3c20b0f6c8af9a1d59bdcbca9f2a69146e"} Jan 21 22:17:26 crc kubenswrapper[4860]: I0121 22:17:26.237305 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8jqx" event={"ID":"4ca6b0c5-b760-4151-9b2e-447d8c2b631d","Type":"ContainerStarted","Data":"6aca792a231ad65843bc8456bef6c148269a42f9cbeb33081221ceec958432f6"} Jan 21 22:17:28 crc kubenswrapper[4860]: I0121 22:17:28.257539 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8jqx" event={"ID":"4ca6b0c5-b760-4151-9b2e-447d8c2b631d","Type":"ContainerStarted","Data":"47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1"} Jan 21 22:17:29 crc kubenswrapper[4860]: I0121 22:17:29.270318 4860 generic.go:334] "Generic (PLEG): container finished" podID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" containerID="47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1" exitCode=0 Jan 21 22:17:29 crc kubenswrapper[4860]: I0121 22:17:29.270370 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8jqx" 
event={"ID":"4ca6b0c5-b760-4151-9b2e-447d8c2b631d","Type":"ContainerDied","Data":"47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1"} Jan 21 22:17:30 crc kubenswrapper[4860]: I0121 22:17:30.155975 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:30 crc kubenswrapper[4860]: I0121 22:17:30.160870 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:30 crc kubenswrapper[4860]: I0121 22:17:30.239358 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:30 crc kubenswrapper[4860]: I0121 22:17:30.354612 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:31 crc kubenswrapper[4860]: I0121 22:17:31.296562 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8jqx" event={"ID":"4ca6b0c5-b760-4151-9b2e-447d8c2b631d","Type":"ContainerStarted","Data":"fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad"} Jan 21 22:17:31 crc kubenswrapper[4860]: I0121 22:17:31.323288 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j8jqx" podStartSLOduration=3.833400499 podStartE2EDuration="7.323259399s" podCreationTimestamp="2026-01-21 22:17:24 +0000 UTC" firstStartedPulling="2026-01-21 22:17:26.24048551 +0000 UTC m=+4138.462664000" lastFinishedPulling="2026-01-21 22:17:29.73034443 +0000 UTC m=+4141.952522900" observedRunningTime="2026-01-21 22:17:31.321103452 +0000 UTC m=+4143.543281942" watchObservedRunningTime="2026-01-21 22:17:31.323259399 +0000 UTC m=+4143.545437889" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.016874 4860 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-5zh5t"] Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.018814 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.039745 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zh5t"] Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.103705 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.103801 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.103873 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.104842 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14d1af4226bf32fc74e96cd396a63c5e4ae53778ade27d795fe68f505e82570e"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.104908 4860 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://14d1af4226bf32fc74e96cd396a63c5e4ae53778ade27d795fe68f505e82570e" gracePeriod=600 Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.196491 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-catalog-content\") pod \"redhat-marketplace-5zh5t\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.196710 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj74b\" (UniqueName: \"kubernetes.io/projected/506cae55-bed3-4fc6-8748-0892d82d1f2f-kube-api-access-jj74b\") pod \"redhat-marketplace-5zh5t\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.196758 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-utilities\") pod \"redhat-marketplace-5zh5t\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.298342 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-utilities\") pod \"redhat-marketplace-5zh5t\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.298512 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-catalog-content\") pod \"redhat-marketplace-5zh5t\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.298585 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj74b\" (UniqueName: \"kubernetes.io/projected/506cae55-bed3-4fc6-8748-0892d82d1f2f-kube-api-access-jj74b\") pod \"redhat-marketplace-5zh5t\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.299263 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-utilities\") pod \"redhat-marketplace-5zh5t\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.300953 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-catalog-content\") pod \"redhat-marketplace-5zh5t\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.315179 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="14d1af4226bf32fc74e96cd396a63c5e4ae53778ade27d795fe68f505e82570e" exitCode=0 Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.315256 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" 
event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"14d1af4226bf32fc74e96cd396a63c5e4ae53778ade27d795fe68f505e82570e"} Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.315800 4860 scope.go:117] "RemoveContainer" containerID="65125bbd566cabdc90efa6f78b7135095220f3ea0056387b833b3d464eb06066" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.323543 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj74b\" (UniqueName: \"kubernetes.io/projected/506cae55-bed3-4fc6-8748-0892d82d1f2f-kube-api-access-jj74b\") pod \"redhat-marketplace-5zh5t\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.346576 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:32 crc kubenswrapper[4860]: I0121 22:17:32.954122 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zh5t"] Jan 21 22:17:33 crc kubenswrapper[4860]: I0121 22:17:33.333360 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128"} Jan 21 22:17:33 crc kubenswrapper[4860]: I0121 22:17:33.338048 4860 generic.go:334] "Generic (PLEG): container finished" podID="506cae55-bed3-4fc6-8748-0892d82d1f2f" containerID="4dbd0032cbaa43861ade1dba6d975ed062bd0069ab3c101dd680cfbd352a4548" exitCode=0 Jan 21 22:17:33 crc kubenswrapper[4860]: I0121 22:17:33.338089 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zh5t" 
event={"ID":"506cae55-bed3-4fc6-8748-0892d82d1f2f","Type":"ContainerDied","Data":"4dbd0032cbaa43861ade1dba6d975ed062bd0069ab3c101dd680cfbd352a4548"} Jan 21 22:17:33 crc kubenswrapper[4860]: I0121 22:17:33.338304 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zh5t" event={"ID":"506cae55-bed3-4fc6-8748-0892d82d1f2f","Type":"ContainerStarted","Data":"a4e38a6bc673ea9e9f691b29ff4ba95115b6faf8d1949bf6f794c544bafe8c31"} Jan 21 22:17:34 crc kubenswrapper[4860]: I0121 22:17:34.945619 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:34 crc kubenswrapper[4860]: I0121 22:17:34.946534 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:35 crc kubenswrapper[4860]: I0121 22:17:35.062385 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:35 crc kubenswrapper[4860]: I0121 22:17:35.359471 4860 generic.go:334] "Generic (PLEG): container finished" podID="506cae55-bed3-4fc6-8748-0892d82d1f2f" containerID="c06f0e949e527c372817a5d496e038f3762604910128673b3f73621608c27c67" exitCode=0 Jan 21 22:17:35 crc kubenswrapper[4860]: I0121 22:17:35.359612 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zh5t" event={"ID":"506cae55-bed3-4fc6-8748-0892d82d1f2f","Type":"ContainerDied","Data":"c06f0e949e527c372817a5d496e038f3762604910128673b3f73621608c27c67"} Jan 21 22:17:35 crc kubenswrapper[4860]: I0121 22:17:35.429595 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:36 crc kubenswrapper[4860]: I0121 22:17:36.379063 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zh5t" 
event={"ID":"506cae55-bed3-4fc6-8748-0892d82d1f2f","Type":"ContainerStarted","Data":"cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb"} Jan 21 22:17:37 crc kubenswrapper[4860]: I0121 22:17:37.012302 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5zh5t" podStartSLOduration=3.522966597 podStartE2EDuration="6.012277272s" podCreationTimestamp="2026-01-21 22:17:31 +0000 UTC" firstStartedPulling="2026-01-21 22:17:33.34028505 +0000 UTC m=+4145.562463520" lastFinishedPulling="2026-01-21 22:17:35.829595705 +0000 UTC m=+4148.051774195" observedRunningTime="2026-01-21 22:17:36.406971957 +0000 UTC m=+4148.629150457" watchObservedRunningTime="2026-01-21 22:17:37.012277272 +0000 UTC m=+4149.234455742" Jan 21 22:17:37 crc kubenswrapper[4860]: I0121 22:17:37.017814 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8jqx"] Jan 21 22:17:37 crc kubenswrapper[4860]: I0121 22:17:37.390668 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j8jqx" podUID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" containerName="registry-server" containerID="cri-o://fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad" gracePeriod=2 Jan 21 22:17:37 crc kubenswrapper[4860]: I0121 22:17:37.910762 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.054440 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-utilities\") pod \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.054508 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-catalog-content\") pod \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.054827 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crk68\" (UniqueName: \"kubernetes.io/projected/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-kube-api-access-crk68\") pod \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\" (UID: \"4ca6b0c5-b760-4151-9b2e-447d8c2b631d\") " Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.055822 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-utilities" (OuterVolumeSpecName: "utilities") pod "4ca6b0c5-b760-4151-9b2e-447d8c2b631d" (UID: "4ca6b0c5-b760-4151-9b2e-447d8c2b631d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.076315 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-kube-api-access-crk68" (OuterVolumeSpecName: "kube-api-access-crk68") pod "4ca6b0c5-b760-4151-9b2e-447d8c2b631d" (UID: "4ca6b0c5-b760-4151-9b2e-447d8c2b631d"). InnerVolumeSpecName "kube-api-access-crk68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.117607 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ca6b0c5-b760-4151-9b2e-447d8c2b631d" (UID: "4ca6b0c5-b760-4151-9b2e-447d8c2b631d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.156761 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.156829 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.156842 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crk68\" (UniqueName: \"kubernetes.io/projected/4ca6b0c5-b760-4151-9b2e-447d8c2b631d-kube-api-access-crk68\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.405985 4860 generic.go:334] "Generic (PLEG): container finished" podID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" containerID="fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad" exitCode=0 Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.406058 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8jqx" event={"ID":"4ca6b0c5-b760-4151-9b2e-447d8c2b631d","Type":"ContainerDied","Data":"fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad"} Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.406114 4860 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8jqx" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.406145 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8jqx" event={"ID":"4ca6b0c5-b760-4151-9b2e-447d8c2b631d","Type":"ContainerDied","Data":"6aca792a231ad65843bc8456bef6c148269a42f9cbeb33081221ceec958432f6"} Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.406265 4860 scope.go:117] "RemoveContainer" containerID="fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.444386 4860 scope.go:117] "RemoveContainer" containerID="47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.451063 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8jqx"] Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.462058 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j8jqx"] Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.475081 4860 scope.go:117] "RemoveContainer" containerID="e9051d1b1174dfad972fa7d130f1fc3c20b0f6c8af9a1d59bdcbca9f2a69146e" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.513417 4860 scope.go:117] "RemoveContainer" containerID="fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad" Jan 21 22:17:38 crc kubenswrapper[4860]: E0121 22:17:38.514039 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad\": container with ID starting with fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad not found: ID does not exist" containerID="fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.514117 
4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad"} err="failed to get container status \"fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad\": rpc error: code = NotFound desc = could not find container \"fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad\": container with ID starting with fdb0c2e5ad35fcbed600e832b99294f17c195b29a016944cd1594c8807af71ad not found: ID does not exist" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.514165 4860 scope.go:117] "RemoveContainer" containerID="47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1" Jan 21 22:17:38 crc kubenswrapper[4860]: E0121 22:17:38.514804 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1\": container with ID starting with 47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1 not found: ID does not exist" containerID="47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.514835 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1"} err="failed to get container status \"47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1\": rpc error: code = NotFound desc = could not find container \"47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1\": container with ID starting with 47d38d6a9294894a43349a0a19b6b47f58c3485fdc1936a191682157674822c1 not found: ID does not exist" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.514854 4860 scope.go:117] "RemoveContainer" containerID="e9051d1b1174dfad972fa7d130f1fc3c20b0f6c8af9a1d59bdcbca9f2a69146e" Jan 21 22:17:38 crc kubenswrapper[4860]: E0121 
22:17:38.515314 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9051d1b1174dfad972fa7d130f1fc3c20b0f6c8af9a1d59bdcbca9f2a69146e\": container with ID starting with e9051d1b1174dfad972fa7d130f1fc3c20b0f6c8af9a1d59bdcbca9f2a69146e not found: ID does not exist" containerID="e9051d1b1174dfad972fa7d130f1fc3c20b0f6c8af9a1d59bdcbca9f2a69146e" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.515344 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9051d1b1174dfad972fa7d130f1fc3c20b0f6c8af9a1d59bdcbca9f2a69146e"} err="failed to get container status \"e9051d1b1174dfad972fa7d130f1fc3c20b0f6c8af9a1d59bdcbca9f2a69146e\": rpc error: code = NotFound desc = could not find container \"e9051d1b1174dfad972fa7d130f1fc3c20b0f6c8af9a1d59bdcbca9f2a69146e\": container with ID starting with e9051d1b1174dfad972fa7d130f1fc3c20b0f6c8af9a1d59bdcbca9f2a69146e not found: ID does not exist" Jan 21 22:17:38 crc kubenswrapper[4860]: I0121 22:17:38.609757 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" path="/var/lib/kubelet/pods/4ca6b0c5-b760-4151-9b2e-447d8c2b631d/volumes" Jan 21 22:17:39 crc kubenswrapper[4860]: I0121 22:17:39.608522 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9zrjt"] Jan 21 22:17:39 crc kubenswrapper[4860]: I0121 22:17:39.608968 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9zrjt" podUID="9e099029-38de-4927-8846-211306e67ca3" containerName="registry-server" containerID="cri-o://d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c" gracePeriod=2 Jan 21 22:17:40 crc kubenswrapper[4860]: E0121 22:17:40.156690 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c is running failed: container process not found" containerID="d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 22:17:40 crc kubenswrapper[4860]: E0121 22:17:40.157564 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c is running failed: container process not found" containerID="d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 22:17:40 crc kubenswrapper[4860]: E0121 22:17:40.158605 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c is running failed: container process not found" containerID="d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 22:17:40 crc kubenswrapper[4860]: E0121 22:17:40.158727 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-9zrjt" podUID="9e099029-38de-4927-8846-211306e67ca3" containerName="registry-server" Jan 21 22:17:40 crc kubenswrapper[4860]: I0121 22:17:40.434395 4860 generic.go:334] "Generic (PLEG): container finished" podID="9e099029-38de-4927-8846-211306e67ca3" containerID="d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c" exitCode=0 Jan 21 22:17:40 crc kubenswrapper[4860]: I0121 22:17:40.434459 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-9zrjt" event={"ID":"9e099029-38de-4927-8846-211306e67ca3","Type":"ContainerDied","Data":"d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c"} Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.241599 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.254985 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-utilities" (OuterVolumeSpecName: "utilities") pod "9e099029-38de-4927-8846-211306e67ca3" (UID: "9e099029-38de-4927-8846-211306e67ca3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.251840 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-utilities\") pod \"9e099029-38de-4927-8846-211306e67ca3\" (UID: \"9e099029-38de-4927-8846-211306e67ca3\") " Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.255423 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-catalog-content\") pod \"9e099029-38de-4927-8846-211306e67ca3\" (UID: \"9e099029-38de-4927-8846-211306e67ca3\") " Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.260306 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdfpb\" (UniqueName: \"kubernetes.io/projected/9e099029-38de-4927-8846-211306e67ca3-kube-api-access-kdfpb\") pod \"9e099029-38de-4927-8846-211306e67ca3\" (UID: \"9e099029-38de-4927-8846-211306e67ca3\") " Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.265494 4860 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.270549 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e099029-38de-4927-8846-211306e67ca3-kube-api-access-kdfpb" (OuterVolumeSpecName: "kube-api-access-kdfpb") pod "9e099029-38de-4927-8846-211306e67ca3" (UID: "9e099029-38de-4927-8846-211306e67ca3"). InnerVolumeSpecName "kube-api-access-kdfpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.341251 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e099029-38de-4927-8846-211306e67ca3" (UID: "9e099029-38de-4927-8846-211306e67ca3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.347881 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.347948 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.367982 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e099029-38de-4927-8846-211306e67ca3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.368027 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdfpb\" (UniqueName: \"kubernetes.io/projected/9e099029-38de-4927-8846-211306e67ca3-kube-api-access-kdfpb\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.409273 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.466052 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zrjt" event={"ID":"9e099029-38de-4927-8846-211306e67ca3","Type":"ContainerDied","Data":"22c8e46aa4e63e7bd0cdef66a016bc6397d780d2594436f729c3ff74e415401a"} Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.466146 4860 scope.go:117] "RemoveContainer" containerID="d71cc9751909dae66d33902d8d93f75dd26a93e07595f41949c66f484d49054c" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.466250 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9zrjt" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.491526 4860 scope.go:117] "RemoveContainer" containerID="799c368d74819aeb0f78f0716e5e34da463fa818fcef561bd9b25ff10473c859" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.513446 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9zrjt"] Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.526875 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9zrjt"] Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.526926 4860 scope.go:117] "RemoveContainer" containerID="6c8751492c55735c1f17ab3c46a428eec58eeb8f9239cdd5d16986d285481eb8" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.529442 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:42 crc kubenswrapper[4860]: I0121 22:17:42.597104 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e099029-38de-4927-8846-211306e67ca3" path="/var/lib/kubelet/pods/9e099029-38de-4927-8846-211306e67ca3/volumes" Jan 21 22:17:47 crc kubenswrapper[4860]: I0121 22:17:47.205341 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zh5t"] Jan 21 22:17:47 crc kubenswrapper[4860]: I0121 22:17:47.206280 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5zh5t" podUID="506cae55-bed3-4fc6-8748-0892d82d1f2f" containerName="registry-server" containerID="cri-o://cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb" gracePeriod=2 Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.301553 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.378279 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-utilities\") pod \"506cae55-bed3-4fc6-8748-0892d82d1f2f\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.378602 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj74b\" (UniqueName: \"kubernetes.io/projected/506cae55-bed3-4fc6-8748-0892d82d1f2f-kube-api-access-jj74b\") pod \"506cae55-bed3-4fc6-8748-0892d82d1f2f\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.378747 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-catalog-content\") pod \"506cae55-bed3-4fc6-8748-0892d82d1f2f\" (UID: \"506cae55-bed3-4fc6-8748-0892d82d1f2f\") " Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.380005 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-utilities" (OuterVolumeSpecName: "utilities") pod "506cae55-bed3-4fc6-8748-0892d82d1f2f" (UID: "506cae55-bed3-4fc6-8748-0892d82d1f2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.386315 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506cae55-bed3-4fc6-8748-0892d82d1f2f-kube-api-access-jj74b" (OuterVolumeSpecName: "kube-api-access-jj74b") pod "506cae55-bed3-4fc6-8748-0892d82d1f2f" (UID: "506cae55-bed3-4fc6-8748-0892d82d1f2f"). InnerVolumeSpecName "kube-api-access-jj74b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.410784 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "506cae55-bed3-4fc6-8748-0892d82d1f2f" (UID: "506cae55-bed3-4fc6-8748-0892d82d1f2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.480087 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.480390 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj74b\" (UniqueName: \"kubernetes.io/projected/506cae55-bed3-4fc6-8748-0892d82d1f2f-kube-api-access-jj74b\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.480405 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/506cae55-bed3-4fc6-8748-0892d82d1f2f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.539810 4860 generic.go:334] "Generic (PLEG): container finished" podID="506cae55-bed3-4fc6-8748-0892d82d1f2f" containerID="cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb" exitCode=0 Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.539867 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zh5t" event={"ID":"506cae55-bed3-4fc6-8748-0892d82d1f2f","Type":"ContainerDied","Data":"cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb"} Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.539900 4860 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-5zh5t" event={"ID":"506cae55-bed3-4fc6-8748-0892d82d1f2f","Type":"ContainerDied","Data":"a4e38a6bc673ea9e9f691b29ff4ba95115b6faf8d1949bf6f794c544bafe8c31"} Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.539921 4860 scope.go:117] "RemoveContainer" containerID="cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.540142 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zh5t" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.577980 4860 scope.go:117] "RemoveContainer" containerID="c06f0e949e527c372817a5d496e038f3762604910128673b3f73621608c27c67" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.606972 4860 scope.go:117] "RemoveContainer" containerID="4dbd0032cbaa43861ade1dba6d975ed062bd0069ab3c101dd680cfbd352a4548" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.611497 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zh5t"] Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.611560 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zh5t"] Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.650064 4860 scope.go:117] "RemoveContainer" containerID="cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb" Jan 21 22:17:48 crc kubenswrapper[4860]: E0121 22:17:48.650820 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb\": container with ID starting with cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb not found: ID does not exist" containerID="cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.650948 4860 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb"} err="failed to get container status \"cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb\": rpc error: code = NotFound desc = could not find container \"cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb\": container with ID starting with cd9f490de1b0f67a84c5821e9cb6235b7ef4eac525b82d93c1910a87a65a64fb not found: ID does not exist" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.651056 4860 scope.go:117] "RemoveContainer" containerID="c06f0e949e527c372817a5d496e038f3762604910128673b3f73621608c27c67" Jan 21 22:17:48 crc kubenswrapper[4860]: E0121 22:17:48.652907 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c06f0e949e527c372817a5d496e038f3762604910128673b3f73621608c27c67\": container with ID starting with c06f0e949e527c372817a5d496e038f3762604910128673b3f73621608c27c67 not found: ID does not exist" containerID="c06f0e949e527c372817a5d496e038f3762604910128673b3f73621608c27c67" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.653027 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06f0e949e527c372817a5d496e038f3762604910128673b3f73621608c27c67"} err="failed to get container status \"c06f0e949e527c372817a5d496e038f3762604910128673b3f73621608c27c67\": rpc error: code = NotFound desc = could not find container \"c06f0e949e527c372817a5d496e038f3762604910128673b3f73621608c27c67\": container with ID starting with c06f0e949e527c372817a5d496e038f3762604910128673b3f73621608c27c67 not found: ID does not exist" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.653113 4860 scope.go:117] "RemoveContainer" containerID="4dbd0032cbaa43861ade1dba6d975ed062bd0069ab3c101dd680cfbd352a4548" Jan 21 22:17:48 crc kubenswrapper[4860]: E0121 
22:17:48.654143 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dbd0032cbaa43861ade1dba6d975ed062bd0069ab3c101dd680cfbd352a4548\": container with ID starting with 4dbd0032cbaa43861ade1dba6d975ed062bd0069ab3c101dd680cfbd352a4548 not found: ID does not exist" containerID="4dbd0032cbaa43861ade1dba6d975ed062bd0069ab3c101dd680cfbd352a4548" Jan 21 22:17:48 crc kubenswrapper[4860]: I0121 22:17:48.654215 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dbd0032cbaa43861ade1dba6d975ed062bd0069ab3c101dd680cfbd352a4548"} err="failed to get container status \"4dbd0032cbaa43861ade1dba6d975ed062bd0069ab3c101dd680cfbd352a4548\": rpc error: code = NotFound desc = could not find container \"4dbd0032cbaa43861ade1dba6d975ed062bd0069ab3c101dd680cfbd352a4548\": container with ID starting with 4dbd0032cbaa43861ade1dba6d975ed062bd0069ab3c101dd680cfbd352a4548 not found: ID does not exist" Jan 21 22:17:50 crc kubenswrapper[4860]: I0121 22:17:50.590426 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506cae55-bed3-4fc6-8748-0892d82d1f2f" path="/var/lib/kubelet/pods/506cae55-bed3-4fc6-8748-0892d82d1f2f/volumes" Jan 21 22:19:32 crc kubenswrapper[4860]: I0121 22:19:32.103803 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:19:32 crc kubenswrapper[4860]: I0121 22:19:32.104796 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 21 22:20:02 crc kubenswrapper[4860]: I0121 22:20:02.103730 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:20:02 crc kubenswrapper[4860]: I0121 22:20:02.104509 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:20:32 crc kubenswrapper[4860]: I0121 22:20:32.103300 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:20:32 crc kubenswrapper[4860]: I0121 22:20:32.104018 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:20:32 crc kubenswrapper[4860]: I0121 22:20:32.104099 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" Jan 21 22:20:32 crc kubenswrapper[4860]: I0121 22:20:32.105377 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128"} 
pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 22:20:32 crc kubenswrapper[4860]: I0121 22:20:32.105436 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" gracePeriod=600 Jan 21 22:20:32 crc kubenswrapper[4860]: E0121 22:20:32.238875 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:20:33 crc kubenswrapper[4860]: I0121 22:20:33.068443 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" exitCode=0 Jan 21 22:20:33 crc kubenswrapper[4860]: I0121 22:20:33.068517 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128"} Jan 21 22:20:33 crc kubenswrapper[4860]: I0121 22:20:33.068645 4860 scope.go:117] "RemoveContainer" containerID="14d1af4226bf32fc74e96cd396a63c5e4ae53778ade27d795fe68f505e82570e" Jan 21 22:20:33 crc kubenswrapper[4860]: I0121 22:20:33.069531 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 
21 22:20:33 crc kubenswrapper[4860]: E0121 22:20:33.069796 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:20:46 crc kubenswrapper[4860]: I0121 22:20:46.580685 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:20:46 crc kubenswrapper[4860]: E0121 22:20:46.581756 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:20:57 crc kubenswrapper[4860]: I0121 22:20:57.579335 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:20:57 crc kubenswrapper[4860]: E0121 22:20:57.580930 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:21:09 crc kubenswrapper[4860]: I0121 22:21:09.580096 4860 scope.go:117] "RemoveContainer" 
containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:21:09 crc kubenswrapper[4860]: E0121 22:21:09.581631 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:21:20 crc kubenswrapper[4860]: I0121 22:21:20.579572 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:21:20 crc kubenswrapper[4860]: E0121 22:21:20.583508 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:21:33 crc kubenswrapper[4860]: I0121 22:21:33.580084 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:21:33 crc kubenswrapper[4860]: E0121 22:21:33.581583 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:21:44 crc kubenswrapper[4860]: I0121 22:21:44.580487 4860 scope.go:117] 
"RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:21:44 crc kubenswrapper[4860]: E0121 22:21:44.582020 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:21:55 crc kubenswrapper[4860]: I0121 22:21:55.579035 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:21:55 crc kubenswrapper[4860]: E0121 22:21:55.580305 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:22:09 crc kubenswrapper[4860]: I0121 22:22:09.579031 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:22:09 crc kubenswrapper[4860]: E0121 22:22:09.580344 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:22:24 crc kubenswrapper[4860]: I0121 22:22:24.579460 
4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:22:24 crc kubenswrapper[4860]: E0121 22:22:24.581020 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:22:38 crc kubenswrapper[4860]: I0121 22:22:38.587191 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:22:38 crc kubenswrapper[4860]: E0121 22:22:38.588251 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:22:49 crc kubenswrapper[4860]: I0121 22:22:49.580633 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:22:49 crc kubenswrapper[4860]: E0121 22:22:49.581460 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:23:04 crc kubenswrapper[4860]: I0121 
22:23:04.583326 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:23:04 crc kubenswrapper[4860]: E0121 22:23:04.584699 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:23:17 crc kubenswrapper[4860]: I0121 22:23:17.579346 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:23:17 crc kubenswrapper[4860]: E0121 22:23:17.580165 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:23:30 crc kubenswrapper[4860]: I0121 22:23:30.579240 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:23:30 crc kubenswrapper[4860]: E0121 22:23:30.580499 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:23:45 crc 
kubenswrapper[4860]: I0121 22:23:45.578848 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:23:45 crc kubenswrapper[4860]: E0121 22:23:45.580299 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:23:56 crc kubenswrapper[4860]: I0121 22:23:56.579440 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:23:56 crc kubenswrapper[4860]: E0121 22:23:56.581360 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:24:07 crc kubenswrapper[4860]: I0121 22:24:07.581226 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:24:07 crc kubenswrapper[4860]: E0121 22:24:07.582732 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 
21 22:24:19 crc kubenswrapper[4860]: I0121 22:24:19.580288 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:24:19 crc kubenswrapper[4860]: E0121 22:24:19.581033 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:24:33 crc kubenswrapper[4860]: I0121 22:24:33.579312 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:24:33 crc kubenswrapper[4860]: E0121 22:24:33.580114 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:24:47 crc kubenswrapper[4860]: I0121 22:24:47.580156 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:24:47 crc kubenswrapper[4860]: E0121 22:24:47.582177 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" 
podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:24:58 crc kubenswrapper[4860]: I0121 22:24:58.587275 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:24:58 crc kubenswrapper[4860]: E0121 22:24:58.588808 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:25:10 crc kubenswrapper[4860]: I0121 22:25:10.580672 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:25:10 crc kubenswrapper[4860]: E0121 22:25:10.581835 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:25:21 crc kubenswrapper[4860]: I0121 22:25:21.579284 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:25:21 crc kubenswrapper[4860]: E0121 22:25:21.581231 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w47lx_openshift-machine-config-operator(ebb59cca-ede6-44c6-850b-28d109e50dea)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" Jan 21 22:25:32 crc kubenswrapper[4860]: I0121 22:25:32.579700 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128" Jan 21 22:25:33 crc kubenswrapper[4860]: I0121 22:25:33.564209 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"aa897da72b91cbaa002f511f705dfec0a739c168e2e4ad90a0797beecc8b3c80"} Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.027400 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qfcn7"] Jan 21 22:26:56 crc kubenswrapper[4860]: E0121 22:26:56.028860 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" containerName="extract-content" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.028884 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" containerName="extract-content" Jan 21 22:26:56 crc kubenswrapper[4860]: E0121 22:26:56.028901 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" containerName="registry-server" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.028907 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" containerName="registry-server" Jan 21 22:26:56 crc kubenswrapper[4860]: E0121 22:26:56.028918 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" containerName="extract-utilities" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.028945 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" containerName="extract-utilities" Jan 21 
22:26:56 crc kubenswrapper[4860]: E0121 22:26:56.028962 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506cae55-bed3-4fc6-8748-0892d82d1f2f" containerName="registry-server" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.028968 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="506cae55-bed3-4fc6-8748-0892d82d1f2f" containerName="registry-server" Jan 21 22:26:56 crc kubenswrapper[4860]: E0121 22:26:56.028987 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506cae55-bed3-4fc6-8748-0892d82d1f2f" containerName="extract-content" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.028993 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="506cae55-bed3-4fc6-8748-0892d82d1f2f" containerName="extract-content" Jan 21 22:26:56 crc kubenswrapper[4860]: E0121 22:26:56.029002 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506cae55-bed3-4fc6-8748-0892d82d1f2f" containerName="extract-utilities" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.029008 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="506cae55-bed3-4fc6-8748-0892d82d1f2f" containerName="extract-utilities" Jan 21 22:26:56 crc kubenswrapper[4860]: E0121 22:26:56.029021 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e099029-38de-4927-8846-211306e67ca3" containerName="registry-server" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.029028 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e099029-38de-4927-8846-211306e67ca3" containerName="registry-server" Jan 21 22:26:56 crc kubenswrapper[4860]: E0121 22:26:56.029039 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e099029-38de-4927-8846-211306e67ca3" containerName="extract-utilities" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.029045 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e099029-38de-4927-8846-211306e67ca3" containerName="extract-utilities" Jan 21 
22:26:56 crc kubenswrapper[4860]: E0121 22:26:56.029058 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e099029-38de-4927-8846-211306e67ca3" containerName="extract-content" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.029064 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e099029-38de-4927-8846-211306e67ca3" containerName="extract-content" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.029264 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e099029-38de-4927-8846-211306e67ca3" containerName="registry-server" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.029291 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ca6b0c5-b760-4151-9b2e-447d8c2b631d" containerName="registry-server" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.029312 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="506cae55-bed3-4fc6-8748-0892d82d1f2f" containerName="registry-server" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.030754 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.059647 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qfcn7"] Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.130627 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsbm\" (UniqueName: \"kubernetes.io/projected/44513c9a-6d9f-4086-b08f-4e8502cfae66-kube-api-access-tnsbm\") pod \"redhat-operators-qfcn7\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.130740 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-catalog-content\") pod \"redhat-operators-qfcn7\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.131221 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-utilities\") pod \"redhat-operators-qfcn7\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.233368 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-utilities\") pod \"redhat-operators-qfcn7\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.233467 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tnsbm\" (UniqueName: \"kubernetes.io/projected/44513c9a-6d9f-4086-b08f-4e8502cfae66-kube-api-access-tnsbm\") pod \"redhat-operators-qfcn7\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.233556 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-catalog-content\") pod \"redhat-operators-qfcn7\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.234383 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-utilities\") pod \"redhat-operators-qfcn7\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.234445 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-catalog-content\") pod \"redhat-operators-qfcn7\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.260338 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnsbm\" (UniqueName: \"kubernetes.io/projected/44513c9a-6d9f-4086-b08f-4e8502cfae66-kube-api-access-tnsbm\") pod \"redhat-operators-qfcn7\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.356400 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:26:56 crc kubenswrapper[4860]: I0121 22:26:56.860437 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qfcn7"] Jan 21 22:26:57 crc kubenswrapper[4860]: I0121 22:26:57.669377 4860 generic.go:334] "Generic (PLEG): container finished" podID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerID="a0c6ccdcc07d57e705ef7536bc78bafdfe7818fa34ea5a07b457d05c674299a8" exitCode=0 Jan 21 22:26:57 crc kubenswrapper[4860]: I0121 22:26:57.669483 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfcn7" event={"ID":"44513c9a-6d9f-4086-b08f-4e8502cfae66","Type":"ContainerDied","Data":"a0c6ccdcc07d57e705ef7536bc78bafdfe7818fa34ea5a07b457d05c674299a8"} Jan 21 22:26:57 crc kubenswrapper[4860]: I0121 22:26:57.669870 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfcn7" event={"ID":"44513c9a-6d9f-4086-b08f-4e8502cfae66","Type":"ContainerStarted","Data":"dcee47d192973368105c61ef1d4000dc8fafc88bf59c9b469e37a8b6e8b6ec06"} Jan 21 22:26:57 crc kubenswrapper[4860]: I0121 22:26:57.672459 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 22:26:59 crc kubenswrapper[4860]: I0121 22:26:59.693910 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfcn7" event={"ID":"44513c9a-6d9f-4086-b08f-4e8502cfae66","Type":"ContainerStarted","Data":"c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948"} Jan 21 22:27:00 crc kubenswrapper[4860]: I0121 22:27:00.705094 4860 generic.go:334] "Generic (PLEG): container finished" podID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerID="c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948" exitCode=0 Jan 21 22:27:00 crc kubenswrapper[4860]: I0121 22:27:00.705208 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-qfcn7" event={"ID":"44513c9a-6d9f-4086-b08f-4e8502cfae66","Type":"ContainerDied","Data":"c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948"} Jan 21 22:27:02 crc kubenswrapper[4860]: I0121 22:27:02.726448 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfcn7" event={"ID":"44513c9a-6d9f-4086-b08f-4e8502cfae66","Type":"ContainerStarted","Data":"f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4"} Jan 21 22:27:02 crc kubenswrapper[4860]: I0121 22:27:02.752532 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qfcn7" podStartSLOduration=4.277835575 podStartE2EDuration="7.752494033s" podCreationTimestamp="2026-01-21 22:26:55 +0000 UTC" firstStartedPulling="2026-01-21 22:26:57.672138795 +0000 UTC m=+4709.894317265" lastFinishedPulling="2026-01-21 22:27:01.146797253 +0000 UTC m=+4713.368975723" observedRunningTime="2026-01-21 22:27:02.744733252 +0000 UTC m=+4714.966911732" watchObservedRunningTime="2026-01-21 22:27:02.752494033 +0000 UTC m=+4714.974672503" Jan 21 22:27:06 crc kubenswrapper[4860]: I0121 22:27:06.357309 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:27:06 crc kubenswrapper[4860]: I0121 22:27:06.357892 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:27:07 crc kubenswrapper[4860]: I0121 22:27:07.416774 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qfcn7" podUID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerName="registry-server" probeResult="failure" output=< Jan 21 22:27:07 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 21 22:27:07 crc kubenswrapper[4860]: > Jan 21 22:27:16 crc kubenswrapper[4860]: I0121 
22:27:16.414363 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:27:16 crc kubenswrapper[4860]: I0121 22:27:16.481827 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:27:19 crc kubenswrapper[4860]: I0121 22:27:19.823156 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qfcn7"] Jan 21 22:27:19 crc kubenswrapper[4860]: I0121 22:27:19.824067 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qfcn7" podUID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerName="registry-server" containerID="cri-o://f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4" gracePeriod=2 Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.369391 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.500812 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnsbm\" (UniqueName: \"kubernetes.io/projected/44513c9a-6d9f-4086-b08f-4e8502cfae66-kube-api-access-tnsbm\") pod \"44513c9a-6d9f-4086-b08f-4e8502cfae66\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.500930 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-catalog-content\") pod \"44513c9a-6d9f-4086-b08f-4e8502cfae66\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.501000 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-utilities\") pod \"44513c9a-6d9f-4086-b08f-4e8502cfae66\" (UID: \"44513c9a-6d9f-4086-b08f-4e8502cfae66\") " Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.502304 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-utilities" (OuterVolumeSpecName: "utilities") pod "44513c9a-6d9f-4086-b08f-4e8502cfae66" (UID: "44513c9a-6d9f-4086-b08f-4e8502cfae66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.527238 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44513c9a-6d9f-4086-b08f-4e8502cfae66-kube-api-access-tnsbm" (OuterVolumeSpecName: "kube-api-access-tnsbm") pod "44513c9a-6d9f-4086-b08f-4e8502cfae66" (UID: "44513c9a-6d9f-4086-b08f-4e8502cfae66"). InnerVolumeSpecName "kube-api-access-tnsbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.603010 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnsbm\" (UniqueName: \"kubernetes.io/projected/44513c9a-6d9f-4086-b08f-4e8502cfae66-kube-api-access-tnsbm\") on node \"crc\" DevicePath \"\"" Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.603050 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.668965 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44513c9a-6d9f-4086-b08f-4e8502cfae66" (UID: "44513c9a-6d9f-4086-b08f-4e8502cfae66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.704955 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44513c9a-6d9f-4086-b08f-4e8502cfae66-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.908554 4860 generic.go:334] "Generic (PLEG): container finished" podID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerID="f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4" exitCode=0 Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.908625 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfcn7" event={"ID":"44513c9a-6d9f-4086-b08f-4e8502cfae66","Type":"ContainerDied","Data":"f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4"} Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.908668 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfcn7" event={"ID":"44513c9a-6d9f-4086-b08f-4e8502cfae66","Type":"ContainerDied","Data":"dcee47d192973368105c61ef1d4000dc8fafc88bf59c9b469e37a8b6e8b6ec06"} Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.908697 4860 scope.go:117] "RemoveContainer" containerID="f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4" Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.910160 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qfcn7" Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.970765 4860 scope.go:117] "RemoveContainer" containerID="c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948" Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.981410 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qfcn7"] Jan 21 22:27:20 crc kubenswrapper[4860]: I0121 22:27:20.994638 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qfcn7"] Jan 21 22:27:21 crc kubenswrapper[4860]: I0121 22:27:21.010090 4860 scope.go:117] "RemoveContainer" containerID="a0c6ccdcc07d57e705ef7536bc78bafdfe7818fa34ea5a07b457d05c674299a8" Jan 21 22:27:21 crc kubenswrapper[4860]: I0121 22:27:21.035852 4860 scope.go:117] "RemoveContainer" containerID="f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4" Jan 21 22:27:21 crc kubenswrapper[4860]: E0121 22:27:21.043089 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4\": container with ID starting with f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4 not found: ID does not exist" containerID="f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4" Jan 21 22:27:21 crc kubenswrapper[4860]: I0121 22:27:21.043245 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4"} err="failed to get container status \"f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4\": rpc error: code = NotFound desc = could not find container \"f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4\": container with ID starting with f6fd018cb2b4dba2f2b0cc63911e07f9d6f1e34d392c7b8692cfe33c5da23bc4 not found: ID does 
not exist" Jan 21 22:27:21 crc kubenswrapper[4860]: I0121 22:27:21.043306 4860 scope.go:117] "RemoveContainer" containerID="c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948" Jan 21 22:27:21 crc kubenswrapper[4860]: E0121 22:27:21.047200 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948\": container with ID starting with c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948 not found: ID does not exist" containerID="c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948" Jan 21 22:27:21 crc kubenswrapper[4860]: I0121 22:27:21.047256 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948"} err="failed to get container status \"c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948\": rpc error: code = NotFound desc = could not find container \"c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948\": container with ID starting with c76d0745968e6d51b69c7b7555a7cde9c1abcb053bf5e4a2062e5c4b6f907948 not found: ID does not exist" Jan 21 22:27:21 crc kubenswrapper[4860]: I0121 22:27:21.047291 4860 scope.go:117] "RemoveContainer" containerID="a0c6ccdcc07d57e705ef7536bc78bafdfe7818fa34ea5a07b457d05c674299a8" Jan 21 22:27:21 crc kubenswrapper[4860]: E0121 22:27:21.047830 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0c6ccdcc07d57e705ef7536bc78bafdfe7818fa34ea5a07b457d05c674299a8\": container with ID starting with a0c6ccdcc07d57e705ef7536bc78bafdfe7818fa34ea5a07b457d05c674299a8 not found: ID does not exist" containerID="a0c6ccdcc07d57e705ef7536bc78bafdfe7818fa34ea5a07b457d05c674299a8" Jan 21 22:27:21 crc kubenswrapper[4860]: I0121 22:27:21.047862 4860 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c6ccdcc07d57e705ef7536bc78bafdfe7818fa34ea5a07b457d05c674299a8"} err="failed to get container status \"a0c6ccdcc07d57e705ef7536bc78bafdfe7818fa34ea5a07b457d05c674299a8\": rpc error: code = NotFound desc = could not find container \"a0c6ccdcc07d57e705ef7536bc78bafdfe7818fa34ea5a07b457d05c674299a8\": container with ID starting with a0c6ccdcc07d57e705ef7536bc78bafdfe7818fa34ea5a07b457d05c674299a8 not found: ID does not exist" Jan 21 22:27:22 crc kubenswrapper[4860]: I0121 22:27:22.593419 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44513c9a-6d9f-4086-b08f-4e8502cfae66" path="/var/lib/kubelet/pods/44513c9a-6d9f-4086-b08f-4e8502cfae66/volumes" Jan 21 22:27:32 crc kubenswrapper[4860]: I0121 22:27:32.103576 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:27:32 crc kubenswrapper[4860]: I0121 22:27:32.104435 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:28:02 crc kubenswrapper[4860]: I0121 22:28:02.103118 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 22:28:02 crc kubenswrapper[4860]: I0121 22:28:02.103722 4860 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.635499 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjqx8"] Jan 21 22:28:08 crc kubenswrapper[4860]: E0121 22:28:08.636750 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerName="extract-utilities" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.636770 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerName="extract-utilities" Jan 21 22:28:08 crc kubenswrapper[4860]: E0121 22:28:08.636792 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerName="registry-server" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.636798 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerName="registry-server" Jan 21 22:28:08 crc kubenswrapper[4860]: E0121 22:28:08.636823 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerName="extract-content" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.636829 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerName="extract-content" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.637069 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="44513c9a-6d9f-4086-b08f-4e8502cfae66" containerName="registry-server" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.638530 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.656561 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjqx8"] Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.759908 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-catalog-content\") pod \"community-operators-gjqx8\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.760018 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw7ll\" (UniqueName: \"kubernetes.io/projected/3da1ba22-5b02-43be-be12-2f370d0b2fee-kube-api-access-dw7ll\") pod \"community-operators-gjqx8\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.760456 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-utilities\") pod \"community-operators-gjqx8\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.862220 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-catalog-content\") pod \"community-operators-gjqx8\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.862299 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dw7ll\" (UniqueName: \"kubernetes.io/projected/3da1ba22-5b02-43be-be12-2f370d0b2fee-kube-api-access-dw7ll\") pod \"community-operators-gjqx8\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.862425 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-utilities\") pod \"community-operators-gjqx8\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.863175 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-catalog-content\") pod \"community-operators-gjqx8\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.863295 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-utilities\") pod \"community-operators-gjqx8\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.892296 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw7ll\" (UniqueName: \"kubernetes.io/projected/3da1ba22-5b02-43be-be12-2f370d0b2fee-kube-api-access-dw7ll\") pod \"community-operators-gjqx8\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:08 crc kubenswrapper[4860]: I0121 22:28:08.965778 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:09 crc kubenswrapper[4860]: I0121 22:28:09.532573 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjqx8"] Jan 21 22:28:09 crc kubenswrapper[4860]: I0121 22:28:09.589815 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqx8" event={"ID":"3da1ba22-5b02-43be-be12-2f370d0b2fee","Type":"ContainerStarted","Data":"5cd4e8b2051153cf864de812e42770f7769f2baccd9bc89c0f674c9c8dce0b10"} Jan 21 22:28:10 crc kubenswrapper[4860]: I0121 22:28:10.600257 4860 generic.go:334] "Generic (PLEG): container finished" podID="3da1ba22-5b02-43be-be12-2f370d0b2fee" containerID="ada50ce2f868a2d7f13ed17b77b7ec04f1e791a6633d747545ee6d15a068d1f4" exitCode=0 Jan 21 22:28:10 crc kubenswrapper[4860]: I0121 22:28:10.600346 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqx8" event={"ID":"3da1ba22-5b02-43be-be12-2f370d0b2fee","Type":"ContainerDied","Data":"ada50ce2f868a2d7f13ed17b77b7ec04f1e791a6633d747545ee6d15a068d1f4"} Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.613650 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqx8" event={"ID":"3da1ba22-5b02-43be-be12-2f370d0b2fee","Type":"ContainerStarted","Data":"98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8"} Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.629417 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r5pjw"] Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.631906 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.659269 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5pjw"] Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.765744 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-utilities\") pod \"certified-operators-r5pjw\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") " pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.766268 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xdmj\" (UniqueName: \"kubernetes.io/projected/928ca598-da79-445f-8e4e-c6ad5f65dd02-kube-api-access-6xdmj\") pod \"certified-operators-r5pjw\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") " pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.766363 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-catalog-content\") pod \"certified-operators-r5pjw\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") " pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.868609 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-utilities\") pod \"certified-operators-r5pjw\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") " pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.868721 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6xdmj\" (UniqueName: \"kubernetes.io/projected/928ca598-da79-445f-8e4e-c6ad5f65dd02-kube-api-access-6xdmj\") pod \"certified-operators-r5pjw\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") " pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.868802 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-catalog-content\") pod \"certified-operators-r5pjw\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") " pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.869459 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-utilities\") pod \"certified-operators-r5pjw\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") " pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.869530 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-catalog-content\") pod \"certified-operators-r5pjw\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") " pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.893036 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xdmj\" (UniqueName: \"kubernetes.io/projected/928ca598-da79-445f-8e4e-c6ad5f65dd02-kube-api-access-6xdmj\") pod \"certified-operators-r5pjw\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") " pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:11 crc kubenswrapper[4860]: I0121 22:28:11.951421 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:12 crc kubenswrapper[4860]: I0121 22:28:12.546540 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5pjw"] Jan 21 22:28:12 crc kubenswrapper[4860]: I0121 22:28:12.627947 4860 generic.go:334] "Generic (PLEG): container finished" podID="3da1ba22-5b02-43be-be12-2f370d0b2fee" containerID="98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8" exitCode=0 Jan 21 22:28:12 crc kubenswrapper[4860]: I0121 22:28:12.628172 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqx8" event={"ID":"3da1ba22-5b02-43be-be12-2f370d0b2fee","Type":"ContainerDied","Data":"98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8"} Jan 21 22:28:12 crc kubenswrapper[4860]: I0121 22:28:12.633686 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5pjw" event={"ID":"928ca598-da79-445f-8e4e-c6ad5f65dd02","Type":"ContainerStarted","Data":"0269ef44d93f9b653535f7e99dc80ac900f26601e6735bf5108a3b876208bbe9"} Jan 21 22:28:13 crc kubenswrapper[4860]: I0121 22:28:13.657798 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqx8" event={"ID":"3da1ba22-5b02-43be-be12-2f370d0b2fee","Type":"ContainerStarted","Data":"ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e"} Jan 21 22:28:13 crc kubenswrapper[4860]: I0121 22:28:13.662649 4860 generic.go:334] "Generic (PLEG): container finished" podID="928ca598-da79-445f-8e4e-c6ad5f65dd02" containerID="8835c072af81d30d42fb26e25ce9caa4f7ea225f451f324faa7968b3f3ae8344" exitCode=0 Jan 21 22:28:13 crc kubenswrapper[4860]: I0121 22:28:13.662724 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5pjw" 
event={"ID":"928ca598-da79-445f-8e4e-c6ad5f65dd02","Type":"ContainerDied","Data":"8835c072af81d30d42fb26e25ce9caa4f7ea225f451f324faa7968b3f3ae8344"} Jan 21 22:28:13 crc kubenswrapper[4860]: I0121 22:28:13.697275 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjqx8" podStartSLOduration=3.255963883 podStartE2EDuration="5.697232994s" podCreationTimestamp="2026-01-21 22:28:08 +0000 UTC" firstStartedPulling="2026-01-21 22:28:10.602580575 +0000 UTC m=+4782.824759045" lastFinishedPulling="2026-01-21 22:28:13.043849686 +0000 UTC m=+4785.266028156" observedRunningTime="2026-01-21 22:28:13.688037351 +0000 UTC m=+4785.910215821" watchObservedRunningTime="2026-01-21 22:28:13.697232994 +0000 UTC m=+4785.919411454" Jan 21 22:28:14 crc kubenswrapper[4860]: I0121 22:28:14.676358 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5pjw" event={"ID":"928ca598-da79-445f-8e4e-c6ad5f65dd02","Type":"ContainerStarted","Data":"71e2b12b38fd3823c1dc2882470c17607a3367c2d420063dadcce881536453b4"} Jan 21 22:28:15 crc kubenswrapper[4860]: I0121 22:28:15.686190 4860 generic.go:334] "Generic (PLEG): container finished" podID="928ca598-da79-445f-8e4e-c6ad5f65dd02" containerID="71e2b12b38fd3823c1dc2882470c17607a3367c2d420063dadcce881536453b4" exitCode=0 Jan 21 22:28:15 crc kubenswrapper[4860]: I0121 22:28:15.686693 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5pjw" event={"ID":"928ca598-da79-445f-8e4e-c6ad5f65dd02","Type":"ContainerDied","Data":"71e2b12b38fd3823c1dc2882470c17607a3367c2d420063dadcce881536453b4"} Jan 21 22:28:16 crc kubenswrapper[4860]: I0121 22:28:16.698830 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5pjw" 
event={"ID":"928ca598-da79-445f-8e4e-c6ad5f65dd02","Type":"ContainerStarted","Data":"7bfce0d1f16ac14c2f25dc81a75d3074ee0f6bcf62c190ac3c086ebb9b257404"} Jan 21 22:28:16 crc kubenswrapper[4860]: I0121 22:28:16.725954 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r5pjw" podStartSLOduration=3.329853383 podStartE2EDuration="5.725914906s" podCreationTimestamp="2026-01-21 22:28:11 +0000 UTC" firstStartedPulling="2026-01-21 22:28:13.666613548 +0000 UTC m=+4785.888792028" lastFinishedPulling="2026-01-21 22:28:16.062675081 +0000 UTC m=+4788.284853551" observedRunningTime="2026-01-21 22:28:16.724080798 +0000 UTC m=+4788.946259278" watchObservedRunningTime="2026-01-21 22:28:16.725914906 +0000 UTC m=+4788.948093376" Jan 21 22:28:18 crc kubenswrapper[4860]: I0121 22:28:18.966089 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:18 crc kubenswrapper[4860]: I0121 22:28:18.966524 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:19 crc kubenswrapper[4860]: I0121 22:28:19.021631 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:19 crc kubenswrapper[4860]: I0121 22:28:19.789711 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:21 crc kubenswrapper[4860]: I0121 22:28:21.414285 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjqx8"] Jan 21 22:28:21 crc kubenswrapper[4860]: I0121 22:28:21.741319 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gjqx8" podUID="3da1ba22-5b02-43be-be12-2f370d0b2fee" containerName="registry-server" 
containerID="cri-o://ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e" gracePeriod=2 Jan 21 22:28:21 crc kubenswrapper[4860]: I0121 22:28:21.952137 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:21 crc kubenswrapper[4860]: I0121 22:28:21.952368 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.009854 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.226218 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.317548 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-catalog-content\") pod \"3da1ba22-5b02-43be-be12-2f370d0b2fee\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.317665 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw7ll\" (UniqueName: \"kubernetes.io/projected/3da1ba22-5b02-43be-be12-2f370d0b2fee-kube-api-access-dw7ll\") pod \"3da1ba22-5b02-43be-be12-2f370d0b2fee\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.317796 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-utilities\") pod \"3da1ba22-5b02-43be-be12-2f370d0b2fee\" (UID: \"3da1ba22-5b02-43be-be12-2f370d0b2fee\") " Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 
22:28:22.319265 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-utilities" (OuterVolumeSpecName: "utilities") pod "3da1ba22-5b02-43be-be12-2f370d0b2fee" (UID: "3da1ba22-5b02-43be-be12-2f370d0b2fee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.327109 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da1ba22-5b02-43be-be12-2f370d0b2fee-kube-api-access-dw7ll" (OuterVolumeSpecName: "kube-api-access-dw7ll") pod "3da1ba22-5b02-43be-be12-2f370d0b2fee" (UID: "3da1ba22-5b02-43be-be12-2f370d0b2fee"). InnerVolumeSpecName "kube-api-access-dw7ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.419755 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.419795 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw7ll\" (UniqueName: \"kubernetes.io/projected/3da1ba22-5b02-43be-be12-2f370d0b2fee-kube-api-access-dw7ll\") on node \"crc\" DevicePath \"\"" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.436347 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3da1ba22-5b02-43be-be12-2f370d0b2fee" (UID: "3da1ba22-5b02-43be-be12-2f370d0b2fee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.521679 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da1ba22-5b02-43be-be12-2f370d0b2fee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.754571 4860 generic.go:334] "Generic (PLEG): container finished" podID="3da1ba22-5b02-43be-be12-2f370d0b2fee" containerID="ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e" exitCode=0 Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.754645 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjqx8" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.754649 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqx8" event={"ID":"3da1ba22-5b02-43be-be12-2f370d0b2fee","Type":"ContainerDied","Data":"ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e"} Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.754701 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqx8" event={"ID":"3da1ba22-5b02-43be-be12-2f370d0b2fee","Type":"ContainerDied","Data":"5cd4e8b2051153cf864de812e42770f7769f2baccd9bc89c0f674c9c8dce0b10"} Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.754727 4860 scope.go:117] "RemoveContainer" containerID="ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.785592 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjqx8"] Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.791789 4860 scope.go:117] "RemoveContainer" containerID="98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8" Jan 21 22:28:22 crc kubenswrapper[4860]: 
I0121 22:28:22.802836 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gjqx8"] Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.823031 4860 scope.go:117] "RemoveContainer" containerID="ada50ce2f868a2d7f13ed17b77b7ec04f1e791a6633d747545ee6d15a068d1f4" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.847917 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r5pjw" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.860644 4860 scope.go:117] "RemoveContainer" containerID="ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e" Jan 21 22:28:22 crc kubenswrapper[4860]: E0121 22:28:22.861253 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e\": container with ID starting with ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e not found: ID does not exist" containerID="ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.861291 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e"} err="failed to get container status \"ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e\": rpc error: code = NotFound desc = could not find container \"ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e\": container with ID starting with ca91001420eaa8c1791e6802304ec2e109c85f09e12b5c6a6e325c7bbbdf674e not found: ID does not exist" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.861321 4860 scope.go:117] "RemoveContainer" containerID="98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8" Jan 21 22:28:22 crc kubenswrapper[4860]: E0121 22:28:22.861907 4860 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8\": container with ID starting with 98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8 not found: ID does not exist" containerID="98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.861951 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8"} err="failed to get container status \"98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8\": rpc error: code = NotFound desc = could not find container \"98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8\": container with ID starting with 98768c0a4f96daba57efe7a29083d140b859bb047f3d889d94f954ad954398d8 not found: ID does not exist" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.861972 4860 scope.go:117] "RemoveContainer" containerID="ada50ce2f868a2d7f13ed17b77b7ec04f1e791a6633d747545ee6d15a068d1f4" Jan 21 22:28:22 crc kubenswrapper[4860]: E0121 22:28:22.862228 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada50ce2f868a2d7f13ed17b77b7ec04f1e791a6633d747545ee6d15a068d1f4\": container with ID starting with ada50ce2f868a2d7f13ed17b77b7ec04f1e791a6633d747545ee6d15a068d1f4 not found: ID does not exist" containerID="ada50ce2f868a2d7f13ed17b77b7ec04f1e791a6633d747545ee6d15a068d1f4" Jan 21 22:28:22 crc kubenswrapper[4860]: I0121 22:28:22.862258 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada50ce2f868a2d7f13ed17b77b7ec04f1e791a6633d747545ee6d15a068d1f4"} err="failed to get container status \"ada50ce2f868a2d7f13ed17b77b7ec04f1e791a6633d747545ee6d15a068d1f4\": rpc error: code = NotFound desc = could 
not find container \"ada50ce2f868a2d7f13ed17b77b7ec04f1e791a6633d747545ee6d15a068d1f4\": container with ID starting with ada50ce2f868a2d7f13ed17b77b7ec04f1e791a6633d747545ee6d15a068d1f4 not found: ID does not exist"
Jan 21 22:28:24 crc kubenswrapper[4860]: I0121 22:28:24.405244 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5pjw"]
Jan 21 22:28:24 crc kubenswrapper[4860]: I0121 22:28:24.593319 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da1ba22-5b02-43be-be12-2f370d0b2fee" path="/var/lib/kubelet/pods/3da1ba22-5b02-43be-be12-2f370d0b2fee/volumes"
Jan 21 22:28:25 crc kubenswrapper[4860]: I0121 22:28:25.782453 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r5pjw" podUID="928ca598-da79-445f-8e4e-c6ad5f65dd02" containerName="registry-server" containerID="cri-o://7bfce0d1f16ac14c2f25dc81a75d3074ee0f6bcf62c190ac3c086ebb9b257404" gracePeriod=2
Jan 21 22:28:26 crc kubenswrapper[4860]: I0121 22:28:26.796151 4860 generic.go:334] "Generic (PLEG): container finished" podID="928ca598-da79-445f-8e4e-c6ad5f65dd02" containerID="7bfce0d1f16ac14c2f25dc81a75d3074ee0f6bcf62c190ac3c086ebb9b257404" exitCode=0
Jan 21 22:28:26 crc kubenswrapper[4860]: I0121 22:28:26.796223 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5pjw" event={"ID":"928ca598-da79-445f-8e4e-c6ad5f65dd02","Type":"ContainerDied","Data":"7bfce0d1f16ac14c2f25dc81a75d3074ee0f6bcf62c190ac3c086ebb9b257404"}
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.368132 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5pjw"
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.427322 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-utilities\") pod \"928ca598-da79-445f-8e4e-c6ad5f65dd02\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") "
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.427729 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xdmj\" (UniqueName: \"kubernetes.io/projected/928ca598-da79-445f-8e4e-c6ad5f65dd02-kube-api-access-6xdmj\") pod \"928ca598-da79-445f-8e4e-c6ad5f65dd02\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") "
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.427859 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-catalog-content\") pod \"928ca598-da79-445f-8e4e-c6ad5f65dd02\" (UID: \"928ca598-da79-445f-8e4e-c6ad5f65dd02\") "
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.431879 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-utilities" (OuterVolumeSpecName: "utilities") pod "928ca598-da79-445f-8e4e-c6ad5f65dd02" (UID: "928ca598-da79-445f-8e4e-c6ad5f65dd02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.442544 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928ca598-da79-445f-8e4e-c6ad5f65dd02-kube-api-access-6xdmj" (OuterVolumeSpecName: "kube-api-access-6xdmj") pod "928ca598-da79-445f-8e4e-c6ad5f65dd02" (UID: "928ca598-da79-445f-8e4e-c6ad5f65dd02"). InnerVolumeSpecName "kube-api-access-6xdmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.477966 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "928ca598-da79-445f-8e4e-c6ad5f65dd02" (UID: "928ca598-da79-445f-8e4e-c6ad5f65dd02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.530572 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.530632 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xdmj\" (UniqueName: \"kubernetes.io/projected/928ca598-da79-445f-8e4e-c6ad5f65dd02-kube-api-access-6xdmj\") on node \"crc\" DevicePath \"\""
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.530649 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928ca598-da79-445f-8e4e-c6ad5f65dd02-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.807666 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5pjw" event={"ID":"928ca598-da79-445f-8e4e-c6ad5f65dd02","Type":"ContainerDied","Data":"0269ef44d93f9b653535f7e99dc80ac900f26601e6735bf5108a3b876208bbe9"}
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.808896 4860 scope.go:117] "RemoveContainer" containerID="7bfce0d1f16ac14c2f25dc81a75d3074ee0f6bcf62c190ac3c086ebb9b257404"
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.807745 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5pjw"
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.836166 4860 scope.go:117] "RemoveContainer" containerID="71e2b12b38fd3823c1dc2882470c17607a3367c2d420063dadcce881536453b4"
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.850557 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5pjw"]
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.872541 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r5pjw"]
Jan 21 22:28:27 crc kubenswrapper[4860]: I0121 22:28:27.880060 4860 scope.go:117] "RemoveContainer" containerID="8835c072af81d30d42fb26e25ce9caa4f7ea225f451f324faa7968b3f3ae8344"
Jan 21 22:28:28 crc kubenswrapper[4860]: I0121 22:28:28.595832 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928ca598-da79-445f-8e4e-c6ad5f65dd02" path="/var/lib/kubelet/pods/928ca598-da79-445f-8e4e-c6ad5f65dd02/volumes"
Jan 21 22:28:32 crc kubenswrapper[4860]: I0121 22:28:32.104550 4860 patch_prober.go:28] interesting pod/machine-config-daemon-w47lx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 22:28:32 crc kubenswrapper[4860]: I0121 22:28:32.104992 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 22:28:32 crc kubenswrapper[4860]: I0121 22:28:32.105078 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w47lx"
Jan 21 22:28:32 crc kubenswrapper[4860]: I0121 22:28:32.106053 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa897da72b91cbaa002f511f705dfec0a739c168e2e4ad90a0797beecc8b3c80"} pod="openshift-machine-config-operator/machine-config-daemon-w47lx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 22:28:32 crc kubenswrapper[4860]: I0121 22:28:32.106107 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" podUID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerName="machine-config-daemon" containerID="cri-o://aa897da72b91cbaa002f511f705dfec0a739c168e2e4ad90a0797beecc8b3c80" gracePeriod=600
Jan 21 22:28:32 crc kubenswrapper[4860]: I0121 22:28:32.856331 4860 generic.go:334] "Generic (PLEG): container finished" podID="ebb59cca-ede6-44c6-850b-28d109e50dea" containerID="aa897da72b91cbaa002f511f705dfec0a739c168e2e4ad90a0797beecc8b3c80" exitCode=0
Jan 21 22:28:32 crc kubenswrapper[4860]: I0121 22:28:32.856411 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerDied","Data":"aa897da72b91cbaa002f511f705dfec0a739c168e2e4ad90a0797beecc8b3c80"}
Jan 21 22:28:32 crc kubenswrapper[4860]: I0121 22:28:32.856993 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w47lx" event={"ID":"ebb59cca-ede6-44c6-850b-28d109e50dea","Type":"ContainerStarted","Data":"7dd3977fecce81d0f64fd530ad719e61ab35f518ab4b62e7f6d2eb3401e81e48"}
Jan 21 22:28:32 crc kubenswrapper[4860]: I0121 22:28:32.857018 4860 scope.go:117] "RemoveContainer" containerID="667b7d53d44c8379a2bdd89bd309599a36230dcb6fd16159826bde07bc015128"
Jan 21 22:29:04 crc kubenswrapper[4860]: I0121 22:29:04.827683 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jmszt"]
Jan 21 22:29:04 crc kubenswrapper[4860]: E0121 22:29:04.829471 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928ca598-da79-445f-8e4e-c6ad5f65dd02" containerName="extract-utilities"
Jan 21 22:29:04 crc kubenswrapper[4860]: I0121 22:29:04.829514 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="928ca598-da79-445f-8e4e-c6ad5f65dd02" containerName="extract-utilities"
Jan 21 22:29:04 crc kubenswrapper[4860]: E0121 22:29:04.829597 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928ca598-da79-445f-8e4e-c6ad5f65dd02" containerName="registry-server"
Jan 21 22:29:04 crc kubenswrapper[4860]: I0121 22:29:04.829612 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="928ca598-da79-445f-8e4e-c6ad5f65dd02" containerName="registry-server"
Jan 21 22:29:04 crc kubenswrapper[4860]: E0121 22:29:04.829632 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928ca598-da79-445f-8e4e-c6ad5f65dd02" containerName="extract-content"
Jan 21 22:29:04 crc kubenswrapper[4860]: I0121 22:29:04.829645 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="928ca598-da79-445f-8e4e-c6ad5f65dd02" containerName="extract-content"
Jan 21 22:29:04 crc kubenswrapper[4860]: E0121 22:29:04.829661 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da1ba22-5b02-43be-be12-2f370d0b2fee" containerName="extract-utilities"
Jan 21 22:29:04 crc kubenswrapper[4860]: I0121 22:29:04.829679 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da1ba22-5b02-43be-be12-2f370d0b2fee" containerName="extract-utilities"
Jan 21 22:29:04 crc kubenswrapper[4860]: E0121 22:29:04.829700 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da1ba22-5b02-43be-be12-2f370d0b2fee" containerName="extract-content"
Jan 21 22:29:04 crc kubenswrapper[4860]: I0121 22:29:04.829715 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da1ba22-5b02-43be-be12-2f370d0b2fee" containerName="extract-content"
Jan 21 22:29:04 crc kubenswrapper[4860]: E0121 22:29:04.829741 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da1ba22-5b02-43be-be12-2f370d0b2fee" containerName="registry-server"
Jan 21 22:29:04 crc kubenswrapper[4860]: I0121 22:29:04.829755 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da1ba22-5b02-43be-be12-2f370d0b2fee" containerName="registry-server"
Jan 21 22:29:04 crc kubenswrapper[4860]: I0121 22:29:04.830248 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da1ba22-5b02-43be-be12-2f370d0b2fee" containerName="registry-server"
Jan 21 22:29:04 crc kubenswrapper[4860]: I0121 22:29:04.830277 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="928ca598-da79-445f-8e4e-c6ad5f65dd02" containerName="registry-server"
Jan 21 22:29:04 crc kubenswrapper[4860]: I0121 22:29:04.833240 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:04 crc kubenswrapper[4860]: I0121 22:29:04.841709 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmszt"]
Jan 21 22:29:05 crc kubenswrapper[4860]: I0121 22:29:05.008185 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-catalog-content\") pod \"redhat-marketplace-jmszt\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") " pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:05 crc kubenswrapper[4860]: I0121 22:29:05.008310 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-utilities\") pod \"redhat-marketplace-jmszt\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") " pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:05 crc kubenswrapper[4860]: I0121 22:29:05.008427 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzv6l\" (UniqueName: \"kubernetes.io/projected/654c6725-4e8f-4d50-b701-b29af5731dea-kube-api-access-vzv6l\") pod \"redhat-marketplace-jmszt\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") " pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:05 crc kubenswrapper[4860]: I0121 22:29:05.180641 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-utilities\") pod \"redhat-marketplace-jmszt\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") " pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:05 crc kubenswrapper[4860]: I0121 22:29:05.181234 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzv6l\" (UniqueName: \"kubernetes.io/projected/654c6725-4e8f-4d50-b701-b29af5731dea-kube-api-access-vzv6l\") pod \"redhat-marketplace-jmszt\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") " pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:05 crc kubenswrapper[4860]: I0121 22:29:05.181538 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-catalog-content\") pod \"redhat-marketplace-jmszt\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") " pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:05 crc kubenswrapper[4860]: I0121 22:29:05.181540 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-utilities\") pod \"redhat-marketplace-jmszt\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") " pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:05 crc kubenswrapper[4860]: I0121 22:29:05.181783 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-catalog-content\") pod \"redhat-marketplace-jmszt\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") " pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:05 crc kubenswrapper[4860]: I0121 22:29:05.205661 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzv6l\" (UniqueName: \"kubernetes.io/projected/654c6725-4e8f-4d50-b701-b29af5731dea-kube-api-access-vzv6l\") pod \"redhat-marketplace-jmszt\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") " pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:05 crc kubenswrapper[4860]: I0121 22:29:05.479229 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:05 crc kubenswrapper[4860]: I0121 22:29:05.997364 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmszt"]
Jan 21 22:29:06 crc kubenswrapper[4860]: I0121 22:29:06.263697 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmszt" event={"ID":"654c6725-4e8f-4d50-b701-b29af5731dea","Type":"ContainerStarted","Data":"92de76c7a585146f21584ffc078689b859bc7b531d55a6f57fe7b77465cf2838"}
Jan 21 22:29:06 crc kubenswrapper[4860]: I0121 22:29:06.264186 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmszt" event={"ID":"654c6725-4e8f-4d50-b701-b29af5731dea","Type":"ContainerStarted","Data":"0a4bc0d62632f2dce8eb5adf4d0a90155721cd249f39ff5a42bb6f0c18ba4d15"}
Jan 21 22:29:07 crc kubenswrapper[4860]: I0121 22:29:07.273183 4860 generic.go:334] "Generic (PLEG): container finished" podID="654c6725-4e8f-4d50-b701-b29af5731dea" containerID="92de76c7a585146f21584ffc078689b859bc7b531d55a6f57fe7b77465cf2838" exitCode=0
Jan 21 22:29:07 crc kubenswrapper[4860]: I0121 22:29:07.273245 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmszt" event={"ID":"654c6725-4e8f-4d50-b701-b29af5731dea","Type":"ContainerDied","Data":"92de76c7a585146f21584ffc078689b859bc7b531d55a6f57fe7b77465cf2838"}
Jan 21 22:29:08 crc kubenswrapper[4860]: I0121 22:29:08.289414 4860 generic.go:334] "Generic (PLEG): container finished" podID="654c6725-4e8f-4d50-b701-b29af5731dea" containerID="0ed1dbb7d05b5f5951a6781af7bc0340b7b080384f11a594a670ed1c628947a2" exitCode=0
Jan 21 22:29:08 crc kubenswrapper[4860]: I0121 22:29:08.289493 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmszt" event={"ID":"654c6725-4e8f-4d50-b701-b29af5731dea","Type":"ContainerDied","Data":"0ed1dbb7d05b5f5951a6781af7bc0340b7b080384f11a594a670ed1c628947a2"}
Jan 21 22:29:09 crc kubenswrapper[4860]: I0121 22:29:09.304700 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmszt" event={"ID":"654c6725-4e8f-4d50-b701-b29af5731dea","Type":"ContainerStarted","Data":"f4b9c30c378f75dc0e30d656487c3b5806f3d7761c413c3b2bf1cce8f30a4f8f"}
Jan 21 22:29:09 crc kubenswrapper[4860]: I0121 22:29:09.339647 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jmszt" podStartSLOduration=3.907799594 podStartE2EDuration="5.339614508s" podCreationTimestamp="2026-01-21 22:29:04 +0000 UTC" firstStartedPulling="2026-01-21 22:29:07.275837157 +0000 UTC m=+4839.498015627" lastFinishedPulling="2026-01-21 22:29:08.707652051 +0000 UTC m=+4840.929830541" observedRunningTime="2026-01-21 22:29:09.329715212 +0000 UTC m=+4841.551893702" watchObservedRunningTime="2026-01-21 22:29:09.339614508 +0000 UTC m=+4841.561792978"
Jan 21 22:29:15 crc kubenswrapper[4860]: I0121 22:29:15.480076 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:15 crc kubenswrapper[4860]: I0121 22:29:15.481846 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:15 crc kubenswrapper[4860]: I0121 22:29:15.538776 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:16 crc kubenswrapper[4860]: I0121 22:29:16.428555 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:19 crc kubenswrapper[4860]: I0121 22:29:19.013607 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmszt"]
Jan 21 22:29:19 crc kubenswrapper[4860]: I0121 22:29:19.404310 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jmszt" podUID="654c6725-4e8f-4d50-b701-b29af5731dea" containerName="registry-server" containerID="cri-o://f4b9c30c378f75dc0e30d656487c3b5806f3d7761c413c3b2bf1cce8f30a4f8f" gracePeriod=2
Jan 21 22:29:20 crc kubenswrapper[4860]: I0121 22:29:20.416157 4860 generic.go:334] "Generic (PLEG): container finished" podID="654c6725-4e8f-4d50-b701-b29af5731dea" containerID="f4b9c30c378f75dc0e30d656487c3b5806f3d7761c413c3b2bf1cce8f30a4f8f" exitCode=0
Jan 21 22:29:20 crc kubenswrapper[4860]: I0121 22:29:20.416201 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmszt" event={"ID":"654c6725-4e8f-4d50-b701-b29af5731dea","Type":"ContainerDied","Data":"f4b9c30c378f75dc0e30d656487c3b5806f3d7761c413c3b2bf1cce8f30a4f8f"}
Jan 21 22:29:20 crc kubenswrapper[4860]: I0121 22:29:20.934729 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.012413 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzv6l\" (UniqueName: \"kubernetes.io/projected/654c6725-4e8f-4d50-b701-b29af5731dea-kube-api-access-vzv6l\") pod \"654c6725-4e8f-4d50-b701-b29af5731dea\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") "
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.012488 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-catalog-content\") pod \"654c6725-4e8f-4d50-b701-b29af5731dea\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") "
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.012686 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-utilities\") pod \"654c6725-4e8f-4d50-b701-b29af5731dea\" (UID: \"654c6725-4e8f-4d50-b701-b29af5731dea\") "
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.015598 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-utilities" (OuterVolumeSpecName: "utilities") pod "654c6725-4e8f-4d50-b701-b29af5731dea" (UID: "654c6725-4e8f-4d50-b701-b29af5731dea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.023127 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654c6725-4e8f-4d50-b701-b29af5731dea-kube-api-access-vzv6l" (OuterVolumeSpecName: "kube-api-access-vzv6l") pod "654c6725-4e8f-4d50-b701-b29af5731dea" (UID: "654c6725-4e8f-4d50-b701-b29af5731dea"). InnerVolumeSpecName "kube-api-access-vzv6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.050385 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "654c6725-4e8f-4d50-b701-b29af5731dea" (UID: "654c6725-4e8f-4d50-b701-b29af5731dea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.115276 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzv6l\" (UniqueName: \"kubernetes.io/projected/654c6725-4e8f-4d50-b701-b29af5731dea-kube-api-access-vzv6l\") on node \"crc\" DevicePath \"\""
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.115679 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.115694 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654c6725-4e8f-4d50-b701-b29af5731dea-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.427714 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmszt" event={"ID":"654c6725-4e8f-4d50-b701-b29af5731dea","Type":"ContainerDied","Data":"0a4bc0d62632f2dce8eb5adf4d0a90155721cd249f39ff5a42bb6f0c18ba4d15"}
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.427780 4860 scope.go:117] "RemoveContainer" containerID="f4b9c30c378f75dc0e30d656487c3b5806f3d7761c413c3b2bf1cce8f30a4f8f"
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.427910 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmszt"
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.470093 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmszt"]
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.470656 4860 scope.go:117] "RemoveContainer" containerID="0ed1dbb7d05b5f5951a6781af7bc0340b7b080384f11a594a670ed1c628947a2"
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.477548 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmszt"]
Jan 21 22:29:21 crc kubenswrapper[4860]: I0121 22:29:21.490760 4860 scope.go:117] "RemoveContainer" containerID="92de76c7a585146f21584ffc078689b859bc7b531d55a6f57fe7b77465cf2838"
Jan 21 22:29:22 crc kubenswrapper[4860]: I0121 22:29:22.593513 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654c6725-4e8f-4d50-b701-b29af5731dea" path="/var/lib/kubelet/pods/654c6725-4e8f-4d50-b701-b29af5731dea/volumes"